HoloLens V1 – UI Panel for showing a message

I wanted something to show a program state message while I’m wearing the HoloLens. To do that, add UI > Panel and UI > Text inside the Panel. You will get a Canvas gameobject as the parent object, with the Panel nested inside. Set the Canvas render mode to “World Space”.

Then we need to attach the Tagalong and Billboard scripts (from HoloToolkit 2017.4.3.0) to the Canvas.

Tagalong moves the Canvas so that it always stays inside the HoloLens view. With the default settings, it moves the Canvas to the border of the HoloLens view, not exactly to the view center.

Billboard rotates the Canvas so that it faces the HoloLens camera. Set Pivot Axis to “Free” so that the Canvas can rotate along all x, y, z axes.

While recording a video, the Tagalong function did not work properly:
the Canvas did not come back inside the HoloLens view when I moved.
Normally, the Canvas stays around the inner border of the view.
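The same setup can also be done from code instead of the Inspector. A minimal sketch (untested, assuming HoloToolkit 2017.4.3.0’s Tagalong and Billboard components and the Billboard.PivotAxis field):

```csharp
using HoloToolkit.Unity;
using UnityEngine;

// Attach to the World Space Canvas to add the follow/face behaviour at
// runtime instead of via the Inspector (sketch only; component and field
// names are from HoloToolkit 2017.4.3.0).
public class StatusCanvasSetup : MonoBehaviour
{
    void Start()
    {
        gameObject.AddComponent<Tagalong>();          // keep the canvas inside the view
        var billboard = gameObject.AddComponent<Billboard>();
        billboard.PivotAxis = PivotAxis.Free;         // rotate freely on x, y, z
    }
}
```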

HoloLens V1 – Xbox Controller

Import HoloToolkit 2017.4.3.0
Menu : Mixed Reality Toolkit > Configure > Apply Mixed Reality Project Settings > check “Use Toolkit-specific InputManager axes”

using System;
using HoloToolkit.Unity.InputModule;


public class XboxControllerInputHandle : XboxControllerHandlerBase
{
    public event Action On_Y_ButtonPressed = delegate { };
    public event Action On_B_ButtonPressed = delegate { };
    public event Action On_X_ButtonPressed = delegate { };

    public override void OnXboxInputUpdate(XboxControllerEventData eventData)
    {
        base.OnXboxInputUpdate(eventData);

        //XboxA_Down overlaps with the default Select action
        if (eventData.XboxY_Down) 
        {
            On_Y_ButtonPressed();
        }
        if (eventData.XboxB_Down)
        {
            On_B_ButtonPressed();
        }
        if (eventData.XboxX_Down)
        {
            On_X_ButtonPressed();
        }
    }
}
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class DrawLineByPoints : MonoBehaviour
{
    private LineRenderer lr;
    private List<Vector3> points;
    public bool visible { get; private set; }

    private void Awake()
    {
        visible = true;
        lr = GetComponent<LineRenderer>();
        points = new List<Vector3>();
    }

    public void Hide()
    {
        visible = false;
        gameObject.SetActive(false);
    }
    public void Show()
    {
        visible = true;
        gameObject.SetActive(true);
    }

    public void SetUpLine(List<Vector3> points)
    {
        lr.positionCount = points.Count;
        this.points = points;
    }

    private void Update()
    {
        for(int i=0;i<points.Count;i++)
        {
            lr.SetPosition(i,points[i]);
        }
    }
}
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity.InputModule;

public class PointAdding : MonoBehaviour
{
    private List<Vector3> positions;
    [SerializeField] private DrawLineByPoints line;
    [SerializeField] private XboxControllerInputHandle XBoxInputHandleScript;

    void Start()
    {
        positions = new List<Vector3>();
        InputManager.Instance.PushFallbackInputHandler(gameObject);

        XBoxInputHandleScript.On_Y_ButtonPressed += Handle_Y_ButtonPressed;
        XBoxInputHandleScript.On_B_ButtonPressed += Handle_B_ButtonPressed;
        XBoxInputHandleScript.On_X_ButtonPressed += Handle_X_ButtonPressed;
    }

    private void Handle_Y_ButtonPressed()
    {
        if(line.visible) line.Hide();
        else line.Show();
    }
    private void Handle_B_ButtonPressed()
    {
        Vector3 hitPoint = GazeManager.Instance.HitPosition;
        positions.Add(hitPoint);
        Debug.Log("Click on " + hitPoint.ToString());
        line.SetUpLine(positions);
    }

    private void Handle_X_ButtonPressed()
    {
        positions.Clear();
        line.SetUpLine(positions);
    }
}

Press B : add a new point for drawing a line
Press X : remove all points
Press Y : show/hide the line
Note that the A button is the Select action by default.

HoloLens V1 – Spatial Mapping

Import HoloToolkit 2017.4.3.0
Hierarchy : Add HoloLensCamera, InputManager, DefaultCursor, SpatialMapping
Publishing Setting, Capabilities : Check SpatialPerception

Then we can access the environment mesh that the HoloLens perceives. (We can choose to show or hide the mesh in the SpatialMapping prefab.)

This program draws a line as we click on positions in the room.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class DrawLineByPoints : MonoBehaviour
{
    private LineRenderer lr;
    private List<Vector3> points;

    private void Awake()
    {
        lr = GetComponent<LineRenderer>();
        points = new List<Vector3>();
    }

    public void SetUpLine(List<Vector3> points)
    {
        lr.positionCount = points.Count;
        this.points = points;
    }

    private void Update()
    {
        for(int i=0;i<points.Count;i++)
        {
            lr.SetPosition(i,points[i]);
            //Debug.Log(points[i]);
        }
    }
}
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity.InputModule;

public class PointAdding : MonoBehaviour, IInputClickHandler
{
    [SerializeField] private DrawLineByPoints line;
    private List<Vector3> positions;

    void Start()
    {
        positions = new List<Vector3>();
        InputManager.Instance.PushFallbackInputHandler(gameObject);
    }

    public void OnInputClicked(InputClickedEventData eventData)
    {
        Debug.Log("\nOnInputClicked");
        Vector3 hitPoint = GazeManager.Instance.HitPosition;

        Debug.Log("Click on " + hitPoint.ToString());
        positions.Add(hitPoint);

        line.SetUpLine(positions); 
    }
}

HoloLens V1 – See Debug.Log messages on the PC

I referred to http://talesfromtherift.com/hololens-contest-week-9/

1. Download SocketTest from http://sourceforge.net/projects/sockettest/ , extract it and open the program. Go to the Udp tab, IP = 0.0.0.0, Port = 9999, press “Start Listening”.

2. Attach DebugLogBroadcaster.cs from the blog above to a gameobject and change Broadcast Port to 9999. (Any port number is fine; it only needs to match the SocketTest program.)

3. In Unity, Publishing Settings : enable the networking capabilities (InternetClient, InternetClientServer and PrivateNetworkClientServer).
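If you want to see the idea behind DebugLogBroadcaster without downloading it, here is a simplified, hypothetical sketch. It uses System.Net.Sockets, which works in the Unity editor; note the real script from the blog handles the UWP (HoloLens) build via Windows.Networking.Sockets instead, so treat this only as an illustration of the mechanism:

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;
using UnityEngine;

// Forwards every Debug.Log message as a UDP broadcast so a listener on the
// PC (e.g. SocketTest on port 9999) can display it.
public class SimpleLogBroadcaster : MonoBehaviour
{
    public int BroadcastPort = 9999;    // must match the port SocketTest listens on
    private UdpClient client;

    void OnEnable()
    {
        client = new UdpClient { EnableBroadcast = true };
        Application.logMessageReceived += OnLog;    // hook Unity's log callback
    }

    void OnDisable()
    {
        Application.logMessageReceived -= OnLog;
        client.Close();
    }

    private void OnLog(string condition, string stackTrace, LogType type)
    {
        byte[] data = Encoding.UTF8.GetBytes(condition);
        client.Send(data, data.Length, new IPEndPoint(IPAddress.Broadcast, BroadcastPort));
    }
}
```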

I’m going to write a program that detects the airtap event and generates a new object in the room. To use the HoloLens interface, import HoloToolkit 2017.4.3.0.

Hierarchy : Add HoloLensCamera, InputManager, DefaultCursor

The Yellow star prefab is from https://assetstore.unity.com/packages/3d/props/simple-gems-ultimate-animated-customizable-pack-73764

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity.InputModule;

public class SpawnObject : MonoBehaviour, IInputClickHandler
{

    public GameObject iprefab;

    void Start ()
    {
        InputManager.Instance.PushFallbackInputHandler(gameObject);
    }

    public void OnInputClicked(InputClickedEventData eventData)
    {
        Debug.Log("\nOnInputClicked");

        GameObject newobj = Instantiate(iprefab, new Vector3(0, 0, 0), Quaternion.Euler(-90, 0, 0));
        newobj.transform.position = Camera.main.transform.position + Camera.main.transform.forward*10;

        Debug.Log("Camera at " + Camera.main.transform.position);
        Debug.Log("Spawn Object at " + newobj.transform.position);

    }
}

Here is the result: we see a cursor that indicates the HoloLens look-at position. When we do an airtap, the program creates a new star at the cursor position. The Debug.Log() output can also be observed in the SocketTest program.

HoloLens V1 – Access Windows Device Portal

1. USB connection

Type http://127.0.0.1:10080/ into the Firefox web browser.

If this error appears:

Type “about:config”, search for “network.security.ports.banned.override”.

If it does not exist, create a new one as a string with the port number 10080; then we can access http://127.0.0.1:10080/

It will ask for the username and password of this HoloLens. If you don’t know them, type anything 3 times and it will let you set a new one.

2. Wifi connection

Type the IP shown in the Holographic app into the Firefox web browser.

Note that your PC needs to be in the same LAN IP group.
Type “ipconfig” in the command prompt (Windows search > “cmd”) to see the IP.

A desktop PC connected by LAN cable may not be in the same IP group as the wifi connection. Plug a USB wifi adapter into the PC to access the wifi instead.

HoloLens V1 – Starting guide

I hadn’t touched the HoloLens for almost 2 years… and I forgot how to use it… It took several days to recall everything, so I need to write this down, otherwise I will forget again.

I have a HoloLens whose current OS is 10.0.17763.2114 (check it at Settings > About), which is Windows 10 Version 1809 (RS5).


https://en.wikipedia.org/wiki/Windows_10_version_history

First I need Visual Studio 2017 and Unity 2018.2.1f1, the same versions I used around 2018-2019.

While installing Visual Studio Community, ensure that Windows 10 SDK 10.0.17763.0, Universal Windows Platform (UWP), .NET, and USB device connection? are selected.

While installing Unity, select UWP Build Support (both IL2CPP and .NET) and Vuforia AR Support. Note that Unity 2018.2.1f1 is the last version that can build apps with .NET.

New Unity 3D project:

1. Main Camera > Clear Flags >
– Change from Skybox to Solid Color,
– Background = Black color.
2. You may change Field of View, Clipping Planes Near-Far later.
3. Add some object to the scene.
– Right click in Hierarchy > 3D Object > Cube.
– Right click in Assets > Create > Material > name = red_material, change albedo color to red.
– Drag the red_material onto the cube; the cube will turn red.
Here we finish setting up a sample scene.

Unity Setting:

1. Window > Package Manager > remove the TextMesh Pro package; it will cause a build failure in Visual Studio.
2. Edit > Project Settings > Quality > select the Very Low level. (may not be necessary)
3. File > Build Settings >
– Platform = Universal Windows Platform, then click Switch Platform
– Player Settings >
— XR Settings > select Virtual Reality Supported

— Publishing Setting > Capabilities > Select: InternetClient, Webcam, Microphone, SpatialPerception (any stuff you will use in your app)

— Other Settings > Scripting Backend = .NET
(note that I use .NET because I have used it since Unity 5.6. Building with the default IL2CPP fails for my project, though it is the new standard now.)

— Icon > Tiles and Logos > Square 150×150 Logo > Scale 200% (300×300 pixels): add the icon image for your app here! (Scale 100% did not work.)

Player Settings >
– Follow settings in the below image.
– Note: don’t select Enable Mixed Reality Remoting; it will build the app as a 2D window.

Build an app:

Click Build, then create a new folder “app”; the source code will be generated in this folder. Otherwise, all the files to deploy to the HoloLens will be mixed up with your Unity project folder.

This also makes it easy to delete the “app” folder to rebuild the app from scratch, or to create another folder “app2” to build another version of your app.

When the build completes, Windows Explorer will pop up showing the “app” folder.
Note that the projectname.sln file outside the “app” folder is a project file for Unity; we use it when coding together with scene design in Unity.
To deploy an app to the HoloLens, you need to open “app/projectname.sln”

<- open this one

Note that you need to turn on developer mode on both the PC (Settings > Security) and the HoloLens.

Connect the HoloLens to the PC with a USB cable.
The first time you deploy, you will be asked to pair the HoloLens with the PC. Get the PIN from the HoloLens (Settings > Updates & Security > For developers > Device discovery > Pair).

Deploying App:

1. Release mode, x86
2. Select Device (it will deploy through the USB cable)
3. Select “Start Without Debugging” (Debug > Start Without Debugging). Get the icon by clicking the small triangle on the right > Customize > Command > Add Command.

Here is the app deployed to the Hololens.

Install CUDA in Ubuntu

We can install the CUDA library (for using a GPU) with the following command:

sudo apt install nvidia-cuda-toolkit

However, it will install the latest release (CUDA 11), which is not yet supported by many research projects in machine learning.

from: https://docs.nvidia.com/deploy/cuda-compatibility/index.html

To install a specific version of CUDA, first check the Linux driver version using the command “nvidia-smi”; if it does not meet the requirement, you need to update the driver.
Then download the CUDA .deb files from
https://developer.nvidia.com/CUDA-TOOLKIT-ARCHIVE

Installation Instructions:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600

wget http://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
sudo apt-key add /var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda-10-2
sudo apt-get -y install cuda    <- needs 1 hr++ to download the upgrade packages required for installation; use the versioned cuda-10-2 package above instead
wget http://developer.download.nvidia.com/compute/machinelearning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
sudo apt install ./nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
sudo apt-get update

Then add the current CUDA version to the PATH environment variables:

# may not be necessary
export CUDA_HOME=/usr/local/cuda-10.2
export CUDA_ROOT=/usr/local/cuda-10.2
export PATH=$PATH:$CUDA_HOME/bin:$CUDA_HOME/include:$CUDA_HOME
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH:$CUDA_HOME/include
export CUDA_INC_DIR=$CUDA_INC_DIR:$CUDA_HOME:$CUDA_HOME/include

Download cuDNN (a GPU-accelerated library for deep neural networks) in the version that matches your CUDA release from
https://developer.nvidia.com/rdp/cudnn-archive

Then install the .deb packages:

sudo dpkg -i libcudnn8_8.0.3.33-1+cuda10.2_amd64.deb
sudo dpkg -i libcudnn8-dev_8.0.3.33-1+cuda10.2_amd64.deb
sudo dpkg -i libcudnn8-samples_8.0.3.33-1+cuda10.2_amd64.deb

Useful commands

Check the current CUDA version:
cat /usr/local/cuda/version.txt
nvcc --version

Check the cuDNN version:
cat /usr/include/x86_64-linux-gnu/cudnn_v*.h | grep CUDNN_MAJOR -A 2
cat /usr/include/cudnn.h | grep CUDNN_MAJOR -A 2

Open the .bashrc file:
gedit ~/.bashrc

Reload .bashrc after changing its content:
source ~/.bashrc

Check symbolic links in a folder:
ls -l /usr/local/

Create a symbolic link:
ln -s <source_file_directory> <link_file_directory>

Print environment variables:
env
env | grep -i cuda

Remove a file:
rm filename

cat prints the contents of a file.
.bashrc is a file that contains the shell configuration and variables of the system.

dpkg is a tool to install, build, remove and manage Debian packages.
-i : install
-P : purge
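The “create symbolic link” command above is how /usr/local/cuda usually points at the active toolkit: each CUDA release installs into its own /usr/local/cuda-X.Y folder, and switching versions just means repointing the link. A safe sketch on throwaway directories (the real command needs sudo and the /usr/local paths):

```shell
# Demo of the symlink trick on temporary folders instead of /usr/local.
mkdir -p /tmp/cuda-demo/cuda-10.2
# -sfn: symbolic, force-replace an existing link, treat the link as a file
ln -sfn /tmp/cuda-demo/cuda-10.2 /tmp/cuda-demo/cuda
readlink /tmp/cuda-demo/cuda    # prints /tmp/cuda-demo/cuda-10.2
```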

Qt Windows deployment

Install Qt Creator 4.11.2
Based on Qt 5.14.2 (MSVC 2017, 32 bit) (64bit?)

When creating the project, make sure qmake is selected.

In the .pro file, add links to the OpenCV libs:

LIBS += C:\opencv3.4.10\build\x64\vc15\lib\opencv_world3410d.lib \
C:\opencv3.4.10\build\x64\vc15\lib\opencv_world3410.lib
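A slightly fuller .pro sketch, separating the debug and release OpenCV libraries (paths are examples for OpenCV 3.4.10 built with MSVC 2017; adjust to your install):

```qmake
# OpenCV headers (path is an example; adjust to your install)
INCLUDEPATH += C:/opencv3.4.10/build/include

# Link the d-suffixed lib in debug builds, the plain lib in release builds
CONFIG(debug, debug|release) {
    LIBS += C:/opencv3.4.10/build/x64/vc15/lib/opencv_world3410d.lib
} else {
    LIBS += C:/opencv3.4.10/build/x64/vc15/lib/opencv_world3410.lib
}
```

Mixing the debug and release libs in one LIBS line can cause runtime-library conflicts in MSVC builds, so splitting them by configuration is safer.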

In the project build settings, if shadow build is selected (it is by default), Qt Creator will create a new build folder separate from the project folder.

Write code > Run qmake > Build as usual in Qt Creator, and the program window will appear.

The built executable is inside the debug folder created by that first build.

But if you take this folder and run it on another computer, you will get: System error. Qt5Core.dll not found

The fix:

Add the Qt folder to the Windows environment PATH.

To create the files for deployment:

Delete the other files so that only yourprogram.exe remains, then open cmd in that folder

and run: windeployqt.exe --quick .

This command generates the other files required to run the program into that folder.

However, the OpenCV .dll files must be copied manually from C:\opencv3.4.10\build\x64\vc15\bin into the build folder.

After that, you can copy the whole folder and run it on another computer.

 

P.S. I found an interesting OpenCV+Qt book:

Computer Vision with OpenCV 3 and Qt5 : build visually appealing, multithreaded, cross-platform computer vision applications

https://github.com/PacktPublishing/Computer-Vision-with-OpenCV-3-and-Qt5

 


ATR talks note

Robot = special purpose, single task.

Human = tolerant of variability; style matters.

Risk is subjective.

Risk = probability of error * cost of error

Perturbation (disturbance) response implements risk-aware reflexes.

Knowing how to respond to errors makes a robot more human-like.

  1. Risk awareness: plan for error
  2. Multiple options: change plan
  3. Flexible behavior : many tasks