## [Internship / Bachelor Thesis] Hand gesture control for the Smart Living Lab

**Selected Topics**: AT, StudyATHome:AI, Machine Vision

Write to: <deinhofe@technikum-wien.at>

The [OAK-D camera](https://shop.luxonis.com/collections/all/products/1098obcenclosure) is a depth camera with embedded computer vision and neural inference support. This means that the processing of a camera frame and the inference of a machine learning model are done on the camera itself, freeing the host CPU (e.g. a Raspberry Pi) for other tasks.

The OAK-D camera can be programmed with a very simple Python API and comes with lots of code examples.
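
Not part of the topic description, just a possible starting point: a minimal sketch (assuming the `depthai` and `opencv-python` packages are installed) that streams RGB preview frames from the OAK-D to the host could look roughly like this.

```python
# Minimal sketch: grab RGB preview frames from an OAK-D on the host.
import cv2
import depthai as dai

# Define the pipeline that runs on the camera itself.
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(640, 400)
cam.setInterleaved(False)

# Stream the preview frames to the host over XLink.
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("rgb")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
    while True:
        frame = queue.get().getCvFrame()  # BGR image as a numpy array
        cv2.imshow("OAK-D preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```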
Use hand tracking, as shown in the images below, to control items of the Smart Living Lab via the openHAB REST API.
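
The control side could be bridged via the openHAB REST API, which accepts item commands as plain-text POST requests. The sketch below is purely illustrative; the server address, item names and gesture labels are placeholders, not the actual Smart Living Lab configuration.

```python
# Illustrative sketch: map a recognized hand gesture to an openHAB item command.
import requests

OPENHAB_URL = "http://openhab.local:8080"  # placeholder server address

# Placeholder mapping from gesture labels to (item, command) pairs.
GESTURE_TO_COMMAND = {
    "FIVE": ("LivingRoom_Light", "ON"),     # open hand -> light on
    "FIST": ("LivingRoom_Light", "OFF"),    # fist -> light off
    "OK":   ("LivingRoom_Blinds", "DOWN"),  # ok sign -> lower the blinds
}

def send_gesture(gesture: str) -> None:
    """Send the command mapped to a gesture to the corresponding openHAB item."""
    if gesture not in GESTURE_TO_COMMAND:
        return
    item, command = GESTURE_TO_COMMAND[gesture]
    # openHAB expects the command as a plain-text body posted to /rest/items/<item>.
    requests.post(
        f"{OPENHAB_URL}/rest/items/{item}",
        data=command,
        headers={"Content-Type": "text/plain"},
        timeout=5,
    )

send_gesture("FIVE")  # e.g. turn the light on when an open hand is detected
```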
![Hand tracking example](uploads/afe36b46451e0045ac660d5c9d75b0ff/image.png)

![Hand tracking demo of the depthai_hand_tracker project](https://raw.githubusercontent.com/geaxgx/depthai_hand_tracker/main/img/hand_tracker.gif)

[OAK-D hand tracking example](https://github.com/geaxgx/depthai_hand_tracker)
### Links
In computer vision, facial landmark algorithms are capable of finding the nose, the mouth, the eyes and other facial features in an image.

Google provides a very simple library (MediaPipe) for tracking facial landmarks in real time. It can be used to control various applications or items, and there are many easy YouTube tutorials on how to use it.
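
As a rough sketch of how such landmark tracking could be wired up (assuming the `mediapipe` and `opencv-python` packages; the landmark index used for the nose is an assumption to be checked against the MediaPipe documentation):

```python
# Minimal sketch: track facial landmarks from a webcam with MediaPipe FaceMesh.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB images, OpenCV delivers BGR.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        nose = landmarks[1]  # normalized x/y/z; index 1 is roughly the nose tip
        print(f"nose at x={nose.x:.2f}, y={nose.y:.2f}")
    cv2.imshow("Face mesh", frame)
    if cv2.waitKey(1) == ord("q"):
        break

cap.release()
```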
![Facial landmark tracking](https://google.github.io/mediapipe/images/mobile/face_mesh_android_gpu.gif)

Possible tasks for this topic:

* A facial-gesture-based music program