Hello All,
Today I want to talk about sharing the schematic and PCB layout. The Joker Eco-System is not only Open Source but Open Hardware too, and I want to start with the most complete module first: "Joker TV" (standalone version). I have started an evaluation of the Upverter project for schematic & PCB sharing. You can open this link to view the "Joker TV" schematic & PCB. Be patient, though, as it can take about two minutes to load and display the project. After that you should see the PCB layout as in the screenshot below. You can also use the top menu to switch to the schematic view.
"Joker TV" Schematic and PCB
"Joker TV" actual hardware (Rev. 1.0)
License
The schematic and PCB layout are shared under the Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0) license. This license allows you to remix, transform, and build upon the material for any purpose, even commercially. If you remix, transform, or build upon the material, you must distribute your contributions under the same...

Hello All,
I have used an Altera (now Intel) FPGA, part number EP4CE22F17C8N. For a hardware description, please check out this post.
The project was prepared and compiled in Intel Quartus Prime Lite Edition 16.1. This is the free version, and it can be downloaded here.
You can build the firmware from source code (see the instructions below) or download the pre-compiled firmware: joker_tv-latest-jic.zip
Building FPGA firmware
Here are the step-by-step instructions for cooking up the firmware:
Clone the source code from GitHub:
git clone https://github.com/aospan/joker-tv-fpga
Open the project file joker_tv/joker_tv.qpf in Quartus software.
Choose "Processing->Start Compilation".
Choose "File -> Convert Programming Files -> Open Conversion Setup Data ...". Then choose the file joker_tv.cof and press the "Generate" button. The file joker_tv.jic should be generated.
To generate the joker_tv.bin file, run the joker_tv/generate_bin_fw.sh script.
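For those who prefer the command line, the GUI steps above can also be scripted with Quartus's command-line tools. This is only a sketch under the assumption that quartus_sh and quartus_cpf from Quartus Prime Lite 16.1 are on your PATH (the directory layout is inferred from the paths mentioned above):

```shell
# Sketch of the same build flow, scripted (assumes Quartus tools on PATH;
# adjust directories to match the actual repository layout).
git clone https://github.com/aospan/joker-tv-fpga
cd joker-tv-fpga/joker_tv

quartus_sh --flow compile joker_tv   # same as Processing -> Start Compilation
quartus_cpf -c joker_tv.cof          # conversion setup file -> joker_tv.jic
./generate_bin_fw.sh                 # produces joker_tv.bin
```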
After compilation you can program the firmware into the SPI flash on the "Joker TV" device. There are two ways to do this.
Programming with JTAG
This method is used by developers, or by end-users to debrick a device with faulty firmware. You need a USB Byte Blaster in order to do this. It can...

Hello All,
Today I want to show you how Joker can be used for speech recognition and speech synthesis using neural networks and the Joker Empathy module.
I have brewed two Docker containers for super simple usage. Just one command is required to run the neural network and obtain the results. This tutorial should work on any Linux distribution and on macOS. No GPU required, only CPU.
This funny video shows voice interaction with Joker:
https://youtu.be/mnw7q0VXYTs
Speech recognition (speech-to-text)
This service is based on the Kaldi ASR project, using Kaldi's 'chain' models (a type of DNN-HMM model). The actual trained model was released by the api.ai team and contains 127,847 words. Compare this number with the Oxford English Dictionary, which contains 171,476 words, or with an average English-speaking adult, who knows between 20,000 and 30,000 words. It should also be said that this model shows an 11.2% word error rate (WER). This is a very good result! "Old" speech recognition methods (GMM-HMM) typically show 21+% WER.
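As a side note, WER is just the word-level edit distance between the recognizer output and the reference transcript, divided by the number of reference words. A minimal sketch in Python (not part of Kaldi, purely for illustration):

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / reference length,
    computed with a standard Levenshtein distance over words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER = 1/6
print(wer("the cat sat on the mat", "the cat sat on mat"))
```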
To run a test, just issue the following command in the console:
docker run -it aospan/stt
The built-in file will be processed, and the output should contain...

Hello All,
Today I want to show you how Joker with the SegNet project can be used for scene understanding (in the AI world this is called "semantic segmentation"). We will take a picture of our room, and the AI will show us which pixels belong to different objects (like "table", "chair", etc.). This tutorial should work on any Linux distribution (CoreOS, Debian, RedHat, etc.). No GPU required, only CPU. I have brewed a ready-to-use Docker image; just issue the following command in the console:
docker run --name segnet --rm -it -v `pwd`/out:/workspace/out aospan/docker-segnet
You should see the following output if the input images were processed successfully:
Grabbed camera frame in 12.1850967407 ms
Resized image in 33.4980487823 ms
Executed SegNet in 11251.4910698 ms
Processed results in 2.71892547607 ms
The input and processed images are located in the ./out folder. Let's take a look at these images:
The left image is the input, and the right is the annotated output image. The AI determined which object each pixel belongs to and marked the objects with different colors.
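The annotation step itself is conceptually simple: every pixel's predicted class ID is looked up in a fixed color palette. Here is a tiny illustrative sketch of that idea; the class names and colors below are hypothetical, not SegNet's actual palette:

```python
# Hypothetical palette: class ID -> (name, RGB color). SegNet's real
# palette has more classes; this is just to show the per-pixel lookup.
PALETTE = {
    0: ("background", (0, 0, 0)),
    1: ("table", (128, 0, 0)),
    2: ("chair", (0, 128, 0)),
}

def colorize(class_map):
    """Turn a 2D grid of predicted class IDs into a grid of RGB colors."""
    return [[PALETTE[c][1] for c in row] for row in class_map]

prediction = [[0, 1],
              [2, 2]]
print(colorize(prediction))
```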
Now you can prepare your own images and experiment with AI semantic segmentation. Input images should be named strictly as...

Hello All,
Today I want to show you how Joker can be used for neural network image classification with the Caffe project. Caffe is a deep learning framework. This tutorial should work on any Linux distribution (CoreOS, Debian, RedHat, etc.).
To simplify the overall process, I have created a Docker container with Caffe (built from source) and the already trained "BVLC CaffeNet Model" (based on ImageNet). Those who want to dive deeper can check the training instructions here.
Let's do some console work. Issue the following command to pull and run the Docker container with the required software:
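Under the hood, CaffeNet produces one raw score per ImageNet class, and the demo reports the most probable labels after a softmax over those scores. A minimal sketch of that final ranking step (the labels and scores here are made up for illustration, not real network output):

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities (numerically stable)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(scores, labels, k=3):
    """Return the k most probable (label, probability) pairs."""
    probs = softmax(scores)
    ranked = sorted(zip(labels, probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

labels = ["tennis ball", "bicycle", "table"]   # illustrative subset of ImageNet classes
scores = [4.0, 1.0, 0.5]                       # hypothetical raw network outputs
print(top_k(scores, labels, k=2))
```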
docker run --name ai -p 5000:5000 aospan/caffe-cpu python /opt/caffe/examples/web_demo/app.py
After the container starts up, you can go to the web interface:
http://joker:5000
and check how the neural network classifies your images. I will take some photos with my smartphone and check:
Hmm, looks good! Our neural network found a tennis ball in the image (I have marked the result with a red arrow). Let's try one more:
A bicycle was found! Try to make photos by...

Hello All,
Today I want to show you how Joker can be used for video decoding and encoding (transcoding) using the Intel GPU (QuickSync technology). Offloading this work to the GPU (graphics processing unit) gives better performance, and the CPU is not loaded with video processing tasks. This tutorial should work on any Linux distribution (CoreOS, Debian, RedHat, etc.).
Let's do some console work. Issue the following command to pull and run the Docker container with the required software:
docker run --privileged --name gstreamer -v /dev:/dev -it aospan/docker-gstreamer-vaapi /bin/bash
Grab some coffee and wait. Eventually you will see the following prompt:
root@dbbc5ddfc092:/#
Now you can issue a command to transcode a sample video file:
/opt/transode-file.sh
This command takes the sample file /opt/moscow24.ts, changes the video codec from MPEG2 (resolution 720x576) to H264 (resolution 720x576), and changes the audio from MP2 to AAC. It should take about 25-30 seconds to transcode a 100-second file.
Now you can change the /opt/transode-file.sh script and do your own experiments with video transcoding on the GPU.
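To put those numbers in perspective, transcoding speed is usually expressed relative to real time: how many seconds of media get processed per second of wall-clock time. A quick back-of-the-envelope check, taking the midpoint of the 25-30 second range quoted above:

```python
def realtime_factor(media_seconds, wall_seconds):
    """Seconds of media transcoded per second of wall-clock time."""
    return media_seconds / wall_seconds

# A 100-second clip done in ~27.5 s (midpoint of the quoted 25-30 s range)
print(round(realtime_factor(100, 27.5), 1))  # -> 3.6, i.e. about 3.6x real time
```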
CPU and GPU load monitoring
Open a new terminal connection to Joker and issue the following command:
docker exec -it...