Mask R-CNN

I have been dissecting the Mask R-CNN architecture lately because I have to present it, but I am still confused about its internal structure.

https://www.slideshare.net/windmdk/mask-rcnn

Let's start with this widely shared figure. It does not come from the paper's authors but from someone who made review slides of the work. The figure is easy to understand: Mask R-CNN is essentially a patch on top of Faster R-CNN (which only does object detection) that adds segmentation.

It has 4 main parts:
1. A CNN that extracts features from the input image, producing a feature map.
2. An RPN that proposes regions likely to contain an object.
3. RoIAlign, which resizes each proposed region of the feature map to the same fixed size (set in advance in the config).
This is needed because training feeds the network tensors (multi-dimensional arrays), so every region must be converted to the same size before it can be packed into a tensor.
4. Heads: classification / object detection / mask prediction.
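To make the RoIAlign idea in step 3 concrete, here is a toy single-channel sketch in NumPy (my own illustration, not the Matterport code, and simplified to one bilinear sample per output cell while the real layer averages several sample points per bin and handles a channel axis):

```python
import numpy as np

def bilinear_sample(feature_map, y, x):
    """Sample a 2D feature_map at fractional coordinates (y, x)."""
    h, w = feature_map.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    return (feature_map[y0, x0] * (1 - dy) * (1 - dx)
            + feature_map[y0, x1] * (1 - dy) * dx
            + feature_map[y1, x0] * dy * (1 - dx)
            + feature_map[y1, x1] * dy * dx)

def roi_align(feature_map, box, out_size=7):
    """Resize the region box = (y1, x1, y2, x2) to out_size x out_size."""
    y1, x1, y2, x2 = box
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            # Sample at the center of each output cell without rounding
            # the coordinates -- this is the "Align" part (RoIPool rounds).
            y = y1 + (i + 0.5) * (y2 - y1) / out_size
            x = x1 + (j + 0.5) * (x2 - x1) / out_size
            out[i, j] = bilinear_sample(feature_map, y, x)
    return out

fmap = np.arange(32 * 32, dtype=float).reshape(32, 32)
pooled = roi_align(fmap, (4.3, 10.7, 19.9, 30.2))
print(pooled.shape)  # (7, 7)
```

Because the sample coordinates are never rounded, the output varies smoothly as the box moves, which is what lets the predicted masks line up with input pixels.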

For the first part, feature extraction, there are two structural options: a plain conv network, using VGG or ResNet as the backbone, or one augmented with a Feature Pyramid Network (FPN), which helps the model learn small objects.
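To see roughly what FPN adds, here is a toy single-channel NumPy sketch (my own illustration, not the real network: I drop the learned 1x1 lateral convolutions and 3x3 smoothing convolutions and just add the maps). Each pyramid level combines the backbone map at that resolution with an upsampled copy of the coarser level above it, so the fine-resolution maps that small objects need also receive deep semantic features:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbor 2x upsampling of a 2D map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Pretend backbone outputs C2..C5 (strides 4, 8, 16, 32) for a 256px image;
# in the real FPN these are multi-channel tensors from ResNet stages.
c_levels = [np.random.rand(256 // s, 256 // s) for s in (4, 8, 16, 32)]

p = {5: c_levels[3]}          # P5 starts from the deepest backbone map
for i in (2, 1, 0):           # build P4, P3, P2 top-down
    p[i + 2] = c_levels[i] + upsample2x(p[i + 3])

for lvl in sorted(p):
    print("P%d" % lvl, p[lvl].shape)  # P2 (64, 64) ... P5 (8, 8)
```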

The original Mask R-CNN code is from Facebook, written in PyTorch.
Since I have never written PyTorch, I chose the Matterport implementation, written in Keras, instead.
I assume the internal architectures of the two versions are not 100% identical.
The one I use has a ResNet101 backbone with FPN.


After calling the function that builds the Mask R-CNN model, we can save the model to a file:

keras.models.save_model(model.keras_model,"mask_rcnn.hdf5")

Open that file in the Netron program and you will see the very long structure.

Alternatively, plot the model and save the image:

#!conda install pydot -y 
#!conda install python-graphviz -y
from keras.utils import plot_model
plot_model(model, show_shapes=True, to_file='model.png')

You get a similarly long graph.
After the feature-extraction part, classes such as ProposalLayer, PyramidROIAlign, and DetectionLayer are invoked, so the graphs from Netron and plot_model() differ slightly.

Once training has started, the model graph can also be viewed in TensorBoard.

Also, the file inspect_model.ipynb has examples of the run_graph() function, which reports the output size of each layer, and of display_images(), which visualizes the output at that point.

# Get activations of a few sample layers
activations = model.run_graph([image], [
    ("input_image_meta", tf.identity(model.keras_model.get_layer("input_image_meta").output)),
    ("rpn_bbox", model.keras_model.get_layer("rpn_bbox").output),
    ("fpn_p6", model.keras_model.get_layer("fpn_p6").output),
])

The printed output:

input_image_meta         shape: (1, 19)               min:    0.00000  max: 1024.00000  float32
rpn_bbox                 shape: (1, 261888, 4)        min:  -18.46270  max:   71.82899  float32
fpn_p6                   shape: (1, 16, 16, 256)      min:  -19.04598  max:   21.66270  float32

# Transpose to (channels, height, width) so each channel displays as one image
tp_layer = np.transpose(activations["fpn_p6"][0, :, :, :], [2, 0, 1])
display_images(tp_layer, cols=64)

The feature maps/activation maps coming out of layer fpn_p6, size 16x16x256.
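As a sanity check on the rpn_bbox shape (1, 261888, 4) above: assuming the Matterport defaults of a 1024x1024 input, FPN strides [4, 8, 16, 32, 64] for levels P2-P6, and 3 anchor aspect ratios per location, 261888 is exactly the total anchor count:

```python
# Reproduce the 261888 anchors in the rpn_bbox shape: one anchor per
# aspect ratio at every location of every FPN level.
image_size = 1024
fpn_strides = [4, 8, 16, 32, 64]   # P2..P6
anchors_per_location = 3           # aspect ratios 0.5, 1, 2

total_anchors = sum((image_size // s) ** 2 * anchors_per_location
                    for s in fpn_strides)
print(total_anchors)  # 261888

# fpn_p6 shape (1, 16, 16, 256) also checks out: 1024 / 64 = 16
print(image_size // fpn_strides[-1])  # 16
```

This also explains why RPN outputs grow quickly with image size: doubling the input side roughly quadruples the anchor count.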

Even so, I still could not piece the Mask R-CNN structure together, so I ended up drawing my own layer diagram (using the drawio program).

The top image shows the layers of the feature-extraction part, ResNet101 + FPN.
The bottom image shows the RPN, RoIAlign, classification, object detection (box regression), and mask prediction parts.

Hololens V1 – Falling star

Each airtap does one of the following: create an object, add a rigidbody (gravity) to the object, or finally delete the object.

Somehow, GazeManager.Instance.HitObject hits the child object that contains the mesh/box collider, so we also need to check the tag name on its parent.
Note that a Mesh Collider needs Convex checked; otherwise, once a rigidbody is added, the object will fall forever without colliding with the room floor.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity.InputModule;

public class SpawnDropObject : MonoBehaviour, IInputClickHandler
{
    //[SerializeField] private PanelDebug panelDebug;
    public GameObject iprefab;
    private int ObjCount;
    private List<GameObject> ObjList;
    void Start()
    {
        InputManager.Instance.PushFallbackInputHandler(gameObject);
        ObjList = new List<GameObject>();
    }

    public void OnInputClicked(InputClickedEventData eventData)
    {
        if (!GazeManager.Instance.HitObject) //airtap at nothing = create new obj
        {
            Debug.Log("!GazeManager.Instance.HitObject");
            Vector3 obj_position = Camera.main.transform.position + Camera.main.transform.forward;
            CreateNewObject(obj_position);
        }
        else
        {
            //panelDebug.ShowMessage(GazeManager.Instance.HitObject.name);
            Debug.Log("\n"+GazeManager.Instance.HitObject.name + " " + GazeManager.Instance.HitObject.tag + " " + GazeManager.Instance.HitObject.transform.parent.tag);

            // Airtap at floating object. Then, add gravity to the obj = drop it down
            if (GazeManager.Instance.HitObject.tag == "Floating") 
            {
                Debug.Log("HitObject.tag == Floating");
                GazeManager.Instance.HitObject.AddComponent<Rigidbody>();  
                GazeManager.Instance.HitObject.tag = "Falldown";
            }
            else if (GazeManager.Instance.HitObject.transform.parent.tag == "Floating")
            {
                Debug.Log("HitObject.parent.tag == Floating");
                GazeManager.Instance.HitObject.AddComponent<Rigidbody>(); 
                GazeManager.Instance.HitObject.transform.parent.tag = "Falldown";
            }

            // Airtap at object on floor. Then, remove it.
            else if (GazeManager.Instance.HitObject.tag == "Falldown") 
            {
                Debug.Log("HitObject.tag == Falldown");
                ObjList.Remove(GazeManager.Instance.HitObject);
                Destroy(GazeManager.Instance.HitObject);
            }
            else if (GazeManager.Instance.HitObject.transform.parent.tag == "Falldown") 
            {
                Debug.Log("HitObject.parent.tag == Falldown");
                ObjList.Remove(GazeManager.Instance.HitObject.transform.parent.gameObject);
                Destroy(GazeManager.Instance.HitObject.transform.parent.gameObject);
            }

            // Airtap at something (room mesh). Then, create new obj.
            else 
            {
                Debug.Log("HitObject.tag == ??");
                Debug.Log("HitObject" + GazeManager.Instance.HitObject.transform.position.ToString());
                Debug.Log("HitPosition" + GazeManager.Instance.HitObject.ToString());
                //CreateNewObject(GazeManager.Instance.HitObject.transform.position); // this position is not the world coordinate
                CreateNewObject(GazeManager.Instance.HitPosition);
            }
        }

        
        string objlistname = "\nObj in List";
        foreach (GameObject obj in ObjList)
        {
            string text = "\n"+obj.name + " " + obj.tag;
            objlistname += text;         
        }
        Debug.Log(objlistname);
    }
    private void CreateNewObject(Vector3 position)
    {
        Debug.Log("CreateNewObject at"+ position.ToString());
        GameObject newobj = Instantiate(iprefab, position, Quaternion.identity);
        newobj.tag = "Floating";

        ObjList.Add(newobj);
    }
}

Hololens V1 – Save text file

This application records the Hololens transformation over time and saves it to a text file (a CSV file in my code) inside the device.

using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using System;
using System.IO;
using System.Linq;

#if WINDOWS_UWP
using Windows.Storage;
using Windows.System;
using System.Threading.Tasks;
using Windows.Storage.Streams;
#endif

// saved folder : User Folders \ LocalAppData \ Appname \ LocalState \
public class WriteCSVFile : MonoBehaviour
{
#if WINDOWS_UWP
    Windows.Storage.ApplicationDataContainer localSettings = Windows.Storage.ApplicationData.Current.LocalSettings;
    Windows.Storage.StorageFolder localFolder = Windows.Storage.ApplicationData.Current.LocalFolder;
#endif

    private string timeStamp;
    private string fileName;
    private string saveInformation;
 
    public void WriteTextToFile(string text)
    {
        timeStamp = System.DateTime.Now.ToString().Replace("/", "_").Replace(":", "-").Replace(" ", "_");
        fileName = "transform-" + timeStamp + ".csv";
        saveInformation = text;
        
#if WINDOWS_UWP
        WriteData();
#endif
    }

#if WINDOWS_UWP
    async void WriteData()
    {
        StorageFile saveFile = await localFolder.CreateFileAsync(fileName, CreationCollisionOption.ReplaceExisting);
        string fileheader = "id,pos_x,pos_y,pos_z,rot_x,rot_y,rot_z" + "\r\n";
        await FileIO.AppendTextAsync(saveFile, fileheader + saveInformation);
    }
#endif

}

In another script file, we record the transform data and call WriteTextToFile() to write it out.

[SerializeField] private WriteCSVFile writeCSVFile;
private StringBuilder csv;

private void PreparePositiontosave()
{
    csv.Remove(0, csv.Length);
    for (int i = 0; i < positions.Count; i++)
    {
        var newLine = string.Format("{0},{1},{2},{3},{4},{5},{6}", i,
                        positions[i].x, positions[i].y, positions[i].z,
                        rotations[i].x, rotations[i].y, rotations[i].z);
        csv.AppendLine(newLine);
    }
    // Write once after all rows are built; calling WriteTextToFile()
    // inside the loop would rewrite the file on every iteration.
    writeCSVFile.WriteTextToFile(csv.ToString());
}

Here is where the file was saved and its content.

Hololens V1 – UI Panel for showing message

I wanted something that shows a program-state message while I'm wearing the Hololens. That is done by adding UI > Panel and UI > Text inside the Panel. You will get a Canvas gameobject as the parent with the Panel nested inside. Set the Canvas render mode to "World Space".

Then we need to attach Tagalong and Billboard scripts (from HoloToolkit 2017.4.3.0) to the Canvas.

Tagalong moves the Canvas so it always stays inside the Hololens view. With the default settings it moves the Canvas to the border of the Hololens view, not exactly to the view center.

Billboard rotates the Canvas to face the Hololens camera. Set Pivot Axis to "Free" so the Canvas can rotate around all x, y, z axes.

While recording a video, the tagalong function did not work properly:
it let the Canvas stay outside the Hololens view when I moved.
Normally, the Canvas stays around the inner border of the view.

Hololens V1 – Xbox Controller

Import HoloToolkit 2017.4.3.0
Menu : Mixed Reality Toolkit > Configure > Apply Mixed Reality Project Settings > check "Use Toolkit-specific InputManager axes"

using System;
using HoloToolkit.Unity.InputModule;


public class XboxControllerInputHandle : XboxControllerHandlerBase
{
    public event Action On_Y_ButtonPressed = delegate { };
    public event Action On_B_ButtonPressed = delegate { };
    public event Action On_X_ButtonPressed = delegate { };

    public override void OnXboxInputUpdate(XboxControllerEventData eventData)
    {
        base.OnXboxInputUpdate(eventData);

        //XboxA_Down overlaps with the default select action
        if (eventData.XboxY_Down) 
        {
            On_Y_ButtonPressed();
        }
        if (eventData.XboxB_Down)
        {
            On_B_ButtonPressed();
        }
        if (eventData.XboxX_Down)
        {
            On_X_ButtonPressed();
        }
    }
}

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class DrawLineByPoints : MonoBehaviour
{
    private LineRenderer lr;
    private List<Vector3> points;
    public bool visible { get; private set; }

    private void Awake()
    {
        visible = true;
        lr = GetComponent<LineRenderer>();
        points = new List<Vector3>();
    }

    public void Hide()
    {
        visible = false;
        gameObject.SetActive(false);
    }
    public void Show()
    {
        visible = true;
        gameObject.SetActive(true);
    }

    public void SetUpLine(List<Vector3> points)
    {
        lr.positionCount = points.Count;
        this.points = points;
    }

    private void Update()
    {
        for(int i=0;i<points.Count;i++)
        {
            lr.SetPosition(i,points[i]);
        }
    }
}

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity.InputModule;

public class PointAdding : MonoBehaviour
{
    private List<Vector3> positions;
    [SerializeField] private DrawLineByPoints line;
    [SerializeField] private XboxControllerInputHandle XBoxInputHandleScript;

    void Start()
    {
        positions = new List<Vector3>();
        InputManager.Instance.PushFallbackInputHandler(gameObject);

        XBoxInputHandleScript.On_Y_ButtonPressed += Handle_Y_ButtonPressed;
        XBoxInputHandleScript.On_B_ButtonPressed += Handle_B_ButtonPressed;
        XBoxInputHandleScript.On_X_ButtonPressed += Handle_X_ButtonPressed;
    }

    private void Handle_Y_ButtonPressed()
    {
        if(line.visible) line.Hide();
        else line.Show();
    }
    private void Handle_B_ButtonPressed()
    {
        Vector3 hitPoint = GazeManager.Instance.HitPosition;
        positions.Add(hitPoint);
        Debug.Log("Click on " + hitPoint.ToString());
        line.SetUpLine(positions);
    }

    private void Handle_X_ButtonPressed()
    {
        positions.Clear();
        line.SetUpLine(positions);
    }
}

Press B : add new point for drawing a line
Press X : remove all points
Press Y : Show/Hide the line
Note that the A button is a select action by default.

Hololens V1 – Spatial Mapping

Import HoloToolkit 2017.4.3.0
Hierarchy : Add HoloLensCamera, InputManager, DefaultCursor, SpatialMapping
Publishing Setting, Capabilities : Check SpatialPerception

Then we can access the environment mesh that the Hololens perceives. (We can choose to show or hide the mesh in the SpatialMapping prefab.)

This program will draw a line as we click on room positions.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class DrawLineByPoints : MonoBehaviour
{
    private LineRenderer lr;
    private List<Vector3> points;

    private void Awake()
    {
        lr = GetComponent<LineRenderer>();
        points = new List<Vector3>();
    }

    public void SetUpLine(List<Vector3> points)
    {
        lr.positionCount = points.Count;
        this.points = points;
    }

    private void Update()
    {
        for(int i=0;i<points.Count;i++)
        {
            lr.SetPosition(i,points[i]);
            //Debug.Log(points[i].position);
        }
    }
}

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity.InputModule;

public class PointAdding : MonoBehaviour, IInputClickHandler
{
    [SerializeField] private DrawLineByPoints line;
    private List<Vector3> positions;

    void Start()
    {
        positions = new List<Vector3>();
        InputManager.Instance.PushFallbackInputHandler(gameObject);
    }

    public void OnInputClicked(InputClickedEventData eventData)
    {
        Debug.Log("\nOnInputClicked");
        Vector3 hitPoint = GazeManager.Instance.HitPosition;

        Debug.Log("Click on " + hitPoint.ToString());
        positions.Add(hitPoint);

        line.SetUpLine(positions); 
    }
}

Hololens V1 – See Debug.Log message on PC

I am following http://talesfromtherift.com/hololens-contest-week-9/

1. Download SocketTest from http://sourceforge.net/projects/sockettest/, extract it, and open the program. Go to the Udp tab, IP = 0.0.0.0, Port = 9999, and press "Start Listening".

2. Attach DebugLogBroadcaster.cs from the blog above to a gameobject and change Broadcast Port to 9999. (Any port number is fine; it only needs to match the SocketTest program.)

3. In Unity, Publishing Settings: enable the networking capabilities (InternetClient, InternetClientServer, and PrivateNetworkClientServer).

I'm going to write a program that detects the airtap event and generates a new object in the room. To use the Hololens interface, import HoloToolkit 2017.4.3.0.

Hierarchy : Add HoloLensCamera, InputManager, DefaultCursor

The Yellow star prefab is from https://assetstore.unity.com/packages/3d/props/simple-gems-ultimate-animated-customizable-pack-73764

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity.InputModule;

public class SpawnObject : MonoBehaviour, IInputClickHandler
{

    public GameObject iprefab;

    void Start ()
    {
        InputManager.Instance.PushFallbackInputHandler(gameObject);
    }

    public void OnInputClicked(InputClickedEventData eventData)
    {
        Debug.Log("\nOnInputClicked");

        GameObject newobj = Instantiate(iprefab, new Vector3(0, 0, 0), Quaternion.Euler(-90, 0, 0));
        newobj.transform.position = Camera.main.transform.position + Camera.main.transform.forward*10;

        Debug.Log("Camera at " + Camera.main.transform.position);
        Debug.Log("Spawn Object at " + newobj.transform.position);

    }
}

Here is the result: we see a cursor that indicates where the Hololens is looking; when we airtap, the program creates a new star at the cursor position. The Debug.Log() messages can also be observed in the SocketTest program.

Hololens V1 – Access Windows Device Portal

1. USB connection

Type http://127.0.0.1:10080/ in the Firefox web browser.

If this error appears:

Type "about:config" in the address bar and search for "network.security.ports.banned.override".

If it does not exist, create it as a new string with the port number 10080; then we can access http://127.0.0.1:10080/

It will ask for the username and password of this Hololens. If you don't know them, type anything 3 times and it will let you set a new one.

2. Wifi connection

Type the IP shown in the Holographic app into the Firefox web browser.

Note that your PC needs to be in the same LAN IP group.
Type "ipconfig" in the command prompt (Windows search > "cmd") to see the IP.

A desktop PC connected with a LAN cable may not be in the same IP group as the wifi connection. Plug a USB wifi adapter into the PC to access the wifi instead.

Hololens V1 – Starting guide

I hadn't touched the Hololens for almost 2 years... then I forgot how to use it... It took several days to recall everything, so I need to write this down, otherwise I will forget again.

I have a Hololens whose current OS is 10.0.17763.2114 (check it at Settings > About), which is Windows 10 Version 1809 (RS5).


https://en.wikipedia.org/wiki/Windows_10_version_history

First, I need Visual Studio 2017 and Unity 2018.2.1f1, the same versions I used around 2018-2019.

While installing Visual Studio Community, ensure that Windows 10 SDK 10.0.17763.0, Universal Windows Platform (UWP), .NET, and USB device connectivity (?) are selected.

While installing Unity, select UWP Build Support (both IL2CPP and .NET) and Vuforia AR Support. Note that Unity 2018.2.1f1 is the last version that can build an app with .NET.

New Unity 3D project:

1. Main Camera > Clear Flags >
– Change from Skybox to Solid Color,
– Background = Black color.
2. You may change Field of View, Clipping Planes Near-Far later.
3. Add some object to the scene.
– Right click in Hierarchy > 3D Object > Cube.
– Right click in Assets > Create > Material > name = red_material, change albedo color to red.
– Drag the red_material to the cube, then the cube color will become red
Here we have finished setting up a sample scene.

Unity Setting:

1. Window > Package Manager > remove the TextMesh Pro package; it will cause a build failure in Visual Studio.
2. Edit > Project Settings > Quality > select the Very Low level. (may not be necessary)
3. File > Build Settings >
– Platform = Universal Windows Platform, then click Switch Platform
– Player Settings >
— XR Settings > select Virtual Reality Supported

— Publishing Settings > Capabilities > select: InternetClient, Webcam, Microphone, SpatialPerception (whatever you will use in your app)

— Other Settings > Scripting Backend = .NET
(note: I use .NET because I have used it since Unity 5.6. Building with the default IL2CPP fails for my project, though it is the new standard now.)

— Icon > Tiles and Logos > Square 150×150 Logo > Scale 200% (300×300 pixel): add the icon image for your app here! (Scale 100% did not work.)

Player Settings >
– Follow settings in the below image.
– Note: do not select Enable Mixed Reality Remoting; it will build the app as a 2D window.

Build an app:

Click Build, then create a new folder "app"; the source code will be generated in this folder. Otherwise, all the files to deploy to the Hololens will be messed up inside your unity project folder.

This also makes it easy to delete the "app" folder to rebuild the app from scratch, or to create another folder "app2" to build another version of your app.

When the build completes, Windows Explorer will pop up showing the "app" folder.
Note that the projectname.sln file outside the "app" folder is the project file for unity; we use it when coding together with scene design in unity.
To deploy the app to the Hololens, you need to open "app/projectname.sln"

<- open this one

Note that you need to turn on developer mode on both the PC (Settings > Update & Security > For developers) and the Hololens.

Connect the Hololens to the PC with a USB cable.
For the first deployment, you will be asked to pair the Hololens with the PC. Get the PIN from the Hololens (Settings > Updates & Security > For developers > Device discovery > Pair).

Deploying App:

1. Release mode, x86
2. Select Device (it will deploy through the USB cable)
3. Select "Start Without Debugging" (Debug > Start Without Debugging). Get the icon by clicking the small triangle on the right > Customize > Command > Add Command.

Here is the app deployed to the Hololens.

Install CUDA in ubuntu

We can install the CUDA library (for using a GPU) with the following command.

sudo apt install nvidia-cuda-toolkit

However, it will install the latest release (CUDA 11), which many research projects in machine learning do not support yet.

from: https://docs.nvidia.com/deploy/cuda-compatibility/index.html

To install a specific version of CUDA, first check the linux driver version with the command "nvidia-smi"; if the driver is too old for that CUDA version, you need to update the linux driver first.
Then download the CUDA .deb files from
https://developer.nvidia.com/CUDA-TOOLKIT-ARCHIVE

Installation instructions:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600

wget http://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
sudo apt-key add /var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda-10-2

Install "cuda-10-2" specifically; "sudo apt-get -y install cuda" pulls the latest version and needs 1 hr++ to download the upgrade packages needed for installation.

For the NVIDIA machine-learning repo:

wget http://developer.download.nvidia.com/compute/machinelearning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
sudo apt install ./nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
sudo apt-get update

Then, add the current CUDA version to the environment paths (e.g. in ~/.bashrc):

#maybe not important to do
export CUDA_HOME=/usr/local/cuda-10.2
export CUDA_ROOT=/usr/local/cuda-10.2
export PATH=$PATH:$CUDA_HOME/bin:$CUDA_HOME/include:$CUDA_HOME
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH:$CUDA_HOME/include
export CUDA_INC_DIR=$CUDA_INC_DIR:$CUDA_HOME:$CUDA_HOME/include

Download the cuDNN version (a GPU-accelerated library for deep neural networks) that matches the CUDA version from
https://developer.nvidia.com/rdp/cudnn-archive

Then install the downloaded packages:

sudo dpkg -i libcudnn8_8.0.3.33-1+cuda10.2_amd64.deb
sudo dpkg -i libcudnn8-dev_8.0.3.33-1+cuda10.2_amd64.deb
sudo dpkg -i libcudnn8-samples_8.0.3.33-1+cuda10.2_amd64.deb

Useful commands

Check the current CUDA version:
cat /usr/local/cuda/version.txt
nvcc --version

Check the cudnn version:
cat /usr/include/x86_64-linux-gnu/cudnn_v*.h | grep CUDNN_MAJOR -A 2
cat /usr/include/cudnn.h | grep CUDNN_MAJOR -A 2

Open the .bashrc file:
gedit .bashrc

Reload .bashrc after changing its content:
source ~/.bashrc

Check symbolic links in a folder:
ls -l /usr/local/

Create a symbolic link:
ln -s <source_file_directory> <link_file_directory>

Print environment variables:
env
env | grep -i cuda

Remove a file:
rm filename

cat is used to print the contents of a file.
.bashrc is a file that contains the shell configuration and environment variables of the system.

dpkg is a tool to install, build, remove, and manage Debian packages.
-i : install
-P : purge