December 2015


My favourite

  • Opencv tutorial people detection
  • Head people cascade download
  • Opencv tutorial optical flow
  • Opencv Video stabilization
  • Opencv car dataset download
  • Opencv tutorial Transparent mask
  • Opencv videowriter
  • Opencv FFMPEG
  • Opencv Canny edge and hough lines
  • Combine two overlapping frame by simple correlation

    Share this for more tutorials and computer vision posts from me. Thanks, Vladimir
    Merge Two mat video frames

    This morning, I read a post on Stack Overflow about combining two overlapping video frames. 
    The problem is to merge two images or two video streams together. I was thinking about the simplest solution that is not too dumb, and I found a really simple one that can be extended into something more complex. 

    First of all, I took two images with my phone, like in the picture (color images). The program selects rectangular regions from both source images and resizes and extracts these roi rectangles. The idea is to find the "best" overlapping Rect regions by normalized correlation and to combine the images at the place of maximal correspondence.


    I know the solutions based on SIFT and SURF well, but they are a little more laborious than this.

    It is not the best solution, but it is one of the simplest. If your cameras are fixed in a stable position relative to each other, this is a good solution, I think. I held my phone in my hand :)

    You can also use this simple approach on video. The speed depends only on the number of rectangle candidates you want to compare.

    You can improve this by smarter selection of the regions to compare.

    Overlapping regions by optical flow

    Also, I am thinking about the idea of using optical flow, by putting the images taken from the cameras at the same time into a sequence behind each other. From the possible overlapping regions in one image, extract good features to track and find them in the corresponding region of the second image.

    matchTemplate Example

    #include <fstream>
    #include <iostream>
    #include "opencv2/highgui.hpp"
    #include "opencv2/imgproc.hpp"
    #include "opencv2/imgcodecs/imgcodecs.hpp"
    #include "opencv2/videoio/videoio.hpp"

    using namespace cv;
    using namespace std;

    int main(int argc, const char** argv)
    {
        Mat OneCamInput;
        Mat SecondCamInput;

        // Load and resize the source images or video frames
        OneCamInput = imread("1.JPG");
        SecondCamInput = imread("2.JPG");
        resize(OneCamInput, OneCamInput, Size(800, 600));
        resize(SecondCamInput, SecondCamInput, Size(800, 600));

        // Show both input images
        imshow("input1", OneCamInput);
        imshow("input2", SecondCamInput);

        // Convert both to gray images
        cvtColor(OneCamInput, OneCamInput, COLOR_BGR2GRAY);
        cvtColor(SecondCamInput, SecondCamInput, COLOR_BGR2GRAY);

        // Prepare the Mat for the matchTemplate result
        Mat res(1, 1, CV_32F);

        // Prepare values for the maximal correspondence
        float resMax = 0;
        Rect RectOver1;
        Rect RectOver2;
        int iRes = 0;

        // Loop over the compared rectangles of different widths
        for (int i = 20; i <= OneCamInput.cols / 4; i = i + 1)
        {
            // Extract rectangles from both source images: the right strip
            // of the first image and the left strip of the second one
            Mat M1 = OneCamInput(Rect(OneCamInput.cols - i, 0, i, OneCamInput.rows));
            Mat M2 = SecondCamInput(Rect(0, 0, i, SecondCamInput.rows));
            imshow("Overlap Rect1", M1);
            imshow("Overlap Rect2", M2);

            // Match how similar the selected rectangles are
            matchTemplate(M1, M2, res, TM_CCOEFF_NORMED);

            // Convert the 1x1 CV_32F result Mat to a float
            float resFloat = res.at<float>(0, 0);

            // Save the maximal correspondence
            if (resFloat >= resMax)
            {
                resMax = resFloat;
                cout << res << endl;
                iRes = i;
                RectOver1 = Rect(OneCamInput.cols - i, 0, i, OneCamInput.rows);
                RectOver2 = Rect(0, 0, i, SecondCamInput.rows);
            }
        }

        Mat HM;
        // Crop the original images at the border of maximal correspondence
        Mat On1Res = OneCamInput(Rect(0, 0, OneCamInput.cols - iRes - RectOver1.width / 6, OneCamInput.rows));
        Mat On2Res = SecondCamInput(Rect(0, 0, SecondCamInput.cols - iRes, SecondCamInput.rows));

        // Horizontally merge the two images together
        hconcat(On1Res, On2Res, HM);
        imshow("Result", HM);
        imwrite("Result.jpg", HM);

        waitKey(0);
        return 0;
    }


    Opencv Tutorial video writer (VideoWriter)

    Tutorial VideoWriter Cut video in Real time

    In this tutorial, I would like to show you how to use the video writer in opencv. This is a bit of a boring task, but there are some insights worth highlighting for successful use. 

    Let me introduce my Setup

    Opencv Visual Studio 2015

    I am working with Visual Studio 2015 and Opencv 3. How to install opencv and build your own program can be found here: Install Opencv Visual Studio 2015
    The Opencv 3 prebuilt libs are shipped as VC11 and VC12 libs. These libs are built for Visual Studio 2012 and 2013. You can use them if you have an older version of Visual Studio and include them in the project. If you use Visual Studio 2015 with these libs, there may be some compatibility problems. This issue could be solved by a Windows redistributable pack with the older dlls.
    I definitely prefer to build the Opencv libs for the appropriate Visual Studio version with Cmake. It is an easy, straightforward process which assumes only basic knowledge of opencv dependencies, Cmake, and Visual Studio.

    Build your own Opencv 3 for Visual Studio 2015!!

    This process takes about one hour of your time, but it saves a lot of time on possible future troubles. Believe me, I went through it. 

    Opencv 3 Video editing and writer

    I would like to add something a little bit different to this tutorial. Let me show you code that is simple and useful:

    Real time video cutting 

    This code shows you how to use Opencv as a simple real-time video editing tool. The program opens a video source: a file, an rtsp stream, or a web camera. 

    Only a small recapitulation

    Select your VideoCapture source. 

    • This source lets you access your web camera 
    VideoCapture capture(0);
    • This source lets you access an rtsp stream if you have FFMPEG installed
    VideoCapture capture("rtsp://");
    • This form lets you access a file in the project directory
    VideoCapture capture("input.mp4");

    Video editing cutting on the fly

    Back to the example. Just choose whichever video source suits you. 
    This program lets you catch keyboard events and record video frames only while recording is toggled on. In other words, the program writes the frames between two keyboard inputs into an external video file. 
    You can watch a long video and cut only the interesting parts on the fly, or press the button when something happens on the RTSP security camera stream. 
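The "record between two keyboard inputs" behavior is just a toggle. Here is a tiny sketch of that state machine, separate from any camera code; ToggleRecorder is a hypothetical name of mine, not part of the tutorial's program:

```cpp
// Toggle recorder: each key press flips between recording and idle,
// and frames are counted as written only while recording is on.
struct ToggleRecorder
{
    bool recording = false;
    int framesWritten = 0;

    // Called once per detected key press
    void onKeyPress() { recording = !recording; }

    // Called once per incoming frame; a real program would
    // write the frame to the VideoWriter here
    void onFrame() { if (recording) framesWritten++; }
};

// Usage sketch: press once to start, press again to stop.
// ToggleRecorder rec;
// rec.onKeyPress();  // start
// rec.onFrame();     // written
// rec.onKeyPress();  // stop
// rec.onFrame();     // dropped
```

The full program below does the same thing, except the toggle state comes from the Windows GetKeyState call instead of an explicit flag.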

    Opencv Video cut, record code example

    This program works on Windows using GetKeyState. Let me use the same input size and format for simplicity. In some future tutorial, I will show more about this broad topic. 

    Record video when the left Shift key changes state

    #include <Windows.h>
    #include <fstream>
    #include <iostream>
    #include "opencv2/highgui.hpp"
    #include "opencv2/imgproc.hpp"
    #include "opencv2/imgcodecs/imgcodecs.hpp"
    #include "opencv2/videoio/videoio.hpp"

    using namespace cv;
    using namespace std;

    int main(int argc, const char** argv)
    {
        Mat LoadedImage;

        // Video capture from file opt.MOV in the project directory
        VideoCapture cap("opt.MOV");

        // This is one of the most important things:
        // your VideoWriter size must correspond with the input video.

        // Size of your output video
        Size SizeOfFrame = cv::Size(800, 600);

        // On windows, write video into Result.wmv with codec WMV2 at 30 FPS
        // and use your predefined Size for simplicity
        VideoWriter video("Result.wmv", VideoWriter::fourcc('W', 'M', 'V', '2'), 30, SizeOfFrame, true);

        for (;;)
        {
            bool Is = cap.grab();
            if (Is == false)
            {
                cout << "cannot grab video frame" << endl;
                break;
            }
            else
            {
                // Receive video from your source
                cap.retrieve(LoadedImage);

                // Resize your video to your VideoWriter size
                // Again, the sizes must correspond
                resize(LoadedImage, LoadedImage, Size(800, 600));

                // Preview all video frames
                namedWindow("Video", WINDOW_AUTOSIZE);
                imshow("Video", LoadedImage);

                // Check the toggle state of the left Shift key
                // (the low-order bit of GetKeyState);
                // while toggled on, write the frames to the file
                if (GetKeyState(VK_LSHIFT) & 0x0001)
                {
                    cout << "Saving video" << endl;
                    // Save the frame into the output video file
                    video << LoadedImage;
                }
                else
                {
                    // Else nothing to write, only show the preview
                    cout << "Only Frame preview" << endl;
                }

                // waitKey lets highgui process window events and draw the frames
                waitKey(20);
            }
        }
        return 0;
    }



    The Marr Prize

    The Marr Prize is a prestigious award in computer vision given by the committee of the International Conference on Computer Vision. The prize is named after David Courtnay Marr (19 January 1945 – 17 November 1980). His research was focused on neuropsychology, psychology, and artificial intelligence, and he successfully integrated these topics into visual processing.

    It is considered one of the top honors for a computer vision scientist. Let me recapitulate the best of the awarded papers.

    Best of the awarded papers 

    • 1987 Marr Prize Paper: David Heeger, Optical Flow using Spatiotemporal Filters

    • 1993 Charles A. Rothwell, David A. Forsyth, Andrew Zisserman, and Joseph L. Mundy, Extracting Projective Structure from Single Perspective Views of 3D Point Sets

    The Marr Prize 2015

    The winner of this year was announced at the International Conference on Computer Vision in Santiago de Chile, December 11-18, 2015 (iccv 2015 site). The awarded paper is a cooperative work of Microsoft Research (Cambridge UK), Carnegie Mellon University, and Fondazione Bruno Kessler (Trento, Italy).

    Deep Neural Decision Forests

    By P. Kontschieder, M. Fiterau, A. Criminisi, and S. Rota Bulo

    • The paper presents Deep Neural Decision Forests: classification trees connected to the functionality and behavior of deep convolutional neural networks.

    Available here Microsoft Research

    ICCV 2015 Tutorials


    There are few resources that help you understand how to remove camera blur. Everybody has some failed photos. This course introduces techniques that let you save your photos.
    There are some good presentation materials. removing motion blur

    ICCV 2015 TUTORIAL The Art of solving Minimal Problems

    This is a page from my school, I guess. The main speakers here are Tomas Pajdla from CTU in Prague and Zuzana Kukelova from Microsoft Research Cambridge (CTU in Prague).
    There are lots of nice resources focused on minimal problems in various computer vision applications. minimal-iccv-2015

    ICCV 2015 TUTORIAL on Tools for Efficient Object Detection

    This tutorial discussed topics related to fast object detectors. Unfortunately, no materials from this tutorial are publicly available. You can go through the related topics and the names of the organizers.
    There are well-known names like Rodrigo Benenson from the Max Planck Institute and Piotr Dollar from Facebook AI Research. Object Detection ICCV2015

    ICCV 2015 TUTORIAL The Mathematics Of Deep Learning

    There are some materials focused on optimal deep learning training, Scattering Convolutional Networks, and the stability of large deep networks.
    The materials are available under TENTATIVE SCHEDULE Here

    ICCV 2015 Workshops with good materials

    Describing and Understanding Video

    This is a big challenge of today: effective indexing and searching of video content. Link Here

    Object Understanding for Interaction

    There are lots of good materials related to pose estimation, motion understanding, object interactions, hand gesture recognition and more Here.

    Human-Like Intelligent Vision Processing


    The CEVA-XM4 is a processor that supports a programmable vector architecture, fixed- and floating-point math units, and a vision-oriented low-power instruction set. The CEVA-XM4 advanced vector architecture is claimed to be 33% faster in image processing than GPU units. The main advantage is targeting the segment of mobile and IoT devices. This is evident in other benefits like small size and extremely low energy consumption.

    CEVA XM4 architecture
    The CEVA-XM4 processor focuses on real-time depth mapping and point cloud applications, computational photography for image enhancement algorithms, and deep learning, like convolutional neural networks (CNN) for object detection, image recognition, and context-aware algorithms. 

    CEVA XM4 target market

    CEVA-XM4 market

    This processor focuses on a wide area of applications in fields like smart devices, automotive, security & surveillance cameras, and wearable devices like glasses, sport cameras, drones, and robots. 

    CEVA XM4 impressive results

    • Real-time computer vision apps on HD and 4K video streams 
    • Depth and 3D augmented reality apps 
    • Multi-functional computer vision apps combining gesture, face recognition, emotion recognition, and eye tracking applications in super resolution with multiple frames processed at once 
    • Deep learning recognition and detection apps with deep neural networks optimized for the fixed-point low-power architecture

    CEVA XM4 processor key features include:

    • Fully programmable in high level languages
    • Scalar and vector units to handle a mix of control and parallel code efficiently
    • Very Long Instruction Word (VLIW) and Single Instruction Multiple Data (SIMD) functionality
    • Full memory sub-system for easy integration into SOCs, utilizing multi-core and hardware accelerator connectivity over standard interfaces
    • Automated traffic management from the system into local memories for best performance and power efficiency
    • Flexible precision: a combination of efficient fixed-point and floating-point math

    CEVA-XM4 complete development platform

    • Extensive vision libraries accessible directly from the CPU for offloading the computer vision functions to achieve energy savings
    • Robust software development tools and a software development framework
    • Hardware development platform
    • Product level software applications developed by CEVA, including Digital Video Stabilizer and Super Resolution

    Source Ceva

    IOT computer vision

    A computer vision IOT company raises $5M from the Swiss fund ABB Technology Ventures, EcoMachines Ventures in London, and the Silicon Valley-based Flex Lab IX.

    This home automation company develops applications for daily usage.


     Features of Pointgrab Intelligent Optical Analytics

     • People counting, location and tracking
     • Accurate and reliable detection of human presence
     • Screening out regions of no interest (using digital masking)
     • Light sensing: highly accurate average Lux reading (globally and in selective zones)
     • Color temperature measurement

    This company is trying to focus on 3 major segments of today: the first is the internet of things, the second big data, and the third machine learning.

     Great idea of Pointgrab  project

    Save energy:
    This could be based on detection of lit lights and the presence of people in a room. In case people are no longer in the room, the algorithm could decide to switch the lights off. This technology can also make decisions like: you are going to sit down in a chair, so let's switch the main lighting of the room off and turn on the small lamp close to you.
    These are some great examples of how this technology could save money.
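As a sketch of what such a decision rule could look like, here is a hypothetical debounced controller that only switches the lights off after the room has been empty for several consecutive frames; this is my own illustration, not Pointgrab's actual algorithm or API:

```cpp
// Hypothetical light controller: switch the lights off only after
// 'framesBeforeOff' consecutive frames without a detected person,
// so a single missed detection does not flip the lights.
struct LightController
{
    int emptyFrames = 0;
    int framesBeforeOff;   // how long the room must stay empty
    bool lightsOn = true;

    explicit LightController(int n) : framesBeforeOff(n) {}

    // Called once per processed frame with the detector result
    void update(bool personDetected)
    {
        if (personDetected) {
            emptyFrames = 0;
            lightsOn = true;
        } else if (++emptyFrames >= framesBeforeOff) {
            lightsOn = false;
        }
    }
};

// Usage sketch: with framesBeforeOff = 3, two empty frames keep the
// lights on; the third empty frame in a row switches them off.
```

The same debounce idea addresses the false-detection worry discussed below: the longer the required empty streak, the fewer wrong switch-offs, at the cost of a slower reaction.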

    The only problem that I see here is false detections that could switch the light on, or call the security agency, at the wrong time. The algorithm could work perfectly for 99 percent of the day, but wrong alerts are a problem in many situations.

    Another advantage is image processing at the sensor level. Tracking and location at the sensor level could be anonymous enough for most of us.

    Point touch

    Another great product is Point Touch, which is gesture control for consumer devices. Experience in this area says that the mouse and keyboard are more comfortable than waving hands. But in some situations this could be a supplement to the standard interface, for example in smart panels in shopping centers, where the motivation is simple: someone does not want to touch what everyone else has touched.


    Computer vision tracking in augmented reality

    Open Hybrid computer vision

    Open Hybrid is a platform that allows you to interact with the physical world. It combines physical objects with an augmented user interface. This platform allows designers to create augmented reality apps with a simple HTML interface. The benefit is that programmers of the apps only use the interface, and no 3D or computer vision knowledge is needed. You can add functionality to a physical world object and deploy your program on platforms like Arduino.

    Open Hybrid augmented reality controller 

    For example, you can add a virtual slider for physical devices controlled by Arduino or another platform.
    Can you imagine the usage of this framework in daily life?
    A brilliant idea: it could be a simple interface for everything you want. Do you want to control a heater? Let's put a marker on the device you want to control and add the augmented reality heater control panel. A simple slider lets you control your physical device. You can also add other data sources to your project and interact with the object based on the actual temperature and so on.

    Problem of augmented reality

    What is the main problem of this solution? Actually, why should you control an object like this if you can control it directly through a web interface in a common web application? The device is still connected with the physical object. Everything is the same, but in augmented reality you can control the physical world by a virtual slider. This is really cool! But the physical device can be controlled directly by a normal slider in a mobile app. There is only one difference: the virtual slider appears on the marker, and the slider in the phone appears when you start the app. I am pretty sure that the marker in a simple app works perfectly.
    Are there any advantages? My first thought is about location. Imagine you are a manager of a large warehouse and you have lots of things to control and authenticate. But there is still a marker, so you could simply start a controller for your object in a simple web app via a QR code instead of an augmented reality app.
    Maybe the advantage is in connecting many physical devices into more complex control platforms and visualizing the relations between them. Another advantage is the control of moving things, where a visual overview is needed. 

    Another info on project  link

    Computer vision cameras

    This is what smart cars really need. 
    More information in Nature Photonics [1]

    Detection and tracking of moving objects hidden from view

    The ability to detect motion and track a moving object hidden around a corner or behind a wall provides a crucial advantage when physically going around the obstacle is impossible or dangerous. Previous methods have demonstrated that it is possible to reconstruct the shape of an object hidden from view. However, these methods do not enable the tracking of movement in real time. We demonstrate a compact non-line-of-sight laser ranging technology that relies on the ability to send light around an obstacle using a scattering floor and then detect the return signal from a hidden object within only a few seconds of acquisition time. By detecting this signal with a single-photon avalanche diode (SPAD) camera, we follow the movement of an object located a metre away from the camera with centimetre precision. We discuss the possibility of applying this technology to a variety of real-life situations in the near future.

    [1] Gariepy, Genevieve, Tonolini, Francesco, Henderson, Robert, Leach, Jonathan, Faccio, Daniele, " Detection and tracking of moving objects hidden from view", Nat Photon, 2015/12/07/online,  advance online publication, Nature Publishing Group
    Computer Vision corner camera

    Hololens in space

    NASA is testing HoloLens at the Jet Propulsion Laboratory in Pasadena, California. This is not only social interaction for astronauts. NASA hopes that HoloLens could help with remote communication with experts on Earth. This could bring not only voice and image support, but also detailed plans and schedules of operations during challenging experiments directly on the Holo screen. NASA is also working on apps like augmented reality for inventory management. How and where do you find things on the space station? A great idea for a small space full of things like that.

    ImageNet challenge Results

    The Microsoft team wins several categories of the ImageNet challenge. They are using extremely deep neural networks. These results are achieved with a depth of over 152 layers. 
    • Object detection with provided training data
    • Classification+localization with provided training data

    Microsoft COCO Dataset

    Microsoft COCO is a new image recognition, segmentation, and captioning dataset for common objects in context.

    Aggregate channel features ACF

    I have been testing Aggregated Channel Features ACF [1] for more than 6 months. Features are extracted from 10 channels: L, U, V (the LUV color space), gradient orientation channels at 0°, 30°, 60°, 90°, 120°, and 150°, and the gradient magnitude.
    These 10 channels are relatively simple to extract, mainly by Sobel derivatives (cv::Sobel).
    Yes, the implementation is mainly based on Opencv.

    I am using this model parametrization:
    int modelRows = 88;
    int modelCols = modelRows / 4;

    The initial features are randomly generated. This is the funny part. In some cases the program holds 5GB of RAM memory and can cause a shutdown of my PC. I can't find out why. :)

    Now, I am trying different approaches to weak feature selection with AdaBoost, GentleBoost, and WaldBoost.

    This is only personal research. In our applications, WaldBoost on Haar + LBP features + Kalman is fast and good enough for me.

    I know that the Opencv contrib module has an ICF/ACF feature extractor and a detector learned by WaldBoost. After some testing, I tried to implement my own version, and these are the first results.

    Adaboost ACF learning details

    • I am using AdaBoost with 1000 weak classifiers. 
    • The pedestrian dataset is my own: 10 000 positive samples. I have been working on this dataset from time to time for more than 2 years.
    • 8 000 negative samples are generated randomly from my travel pictures :) 
    Yes, first results after 6 months. 

    Aggregate channel features first results

    In this example, the Town Centre Dataset is used only for demonstration. I cannot find any conditions of usage on its page. Town Centre Dataset

     [1] P.Dollar, R.Appel, S.Belongie and P.Perona. "Fast feature pyramids for object detection". TPAMI, 2014. 1, 2, 7 

    Sliding Window, search objects single scale

    Opencv C++ tutorial about object detection with a sliding window. A sliding window is easy to implement in single scale, and also not too much harder to implement in multi scale, for example for detection inside a bigger mat. I would like to visualize all the steps in the code and describe them in a natural C++ way, as // comments. Enjoy the coding. 

    The first tutorial about Mat resizing is Mat Resize
    The second tutorial about the Mat roi is Roi

    Opencv installation for the tutorial

    You can simply prepare the project inside Visual Studio 2015 with Nuget packages. This approach is easy for beginners and better than a standard installation with all the environment variable problems. Just follow the installation steps inside here.
    I am using Visual Studio 2015. How to use Opencv 3.0.0 with Visual Studio can be found here: install opencv visual studio 2015.

      Sliding window for detection opencv code

    #include "opencv2/highgui.hpp"
    #include "opencv2/imgproc.hpp"
    #include "opencv2/video/background_segm.hpp"
    #include "opencv2/video/tracking.hpp"

    using namespace cv;
    using namespace std;

    int main(int argc, const char** argv)
    {
        // Load the image from file
        Mat LoadedImage;
        // Just load image Lenna.png from the project dir into the LoadedImage Mat
        LoadedImage = imread("Lenna.png", IMREAD_COLOR);
        // I would like to visualize the Mat step by step to see the result immediately.

        // Show what is in the Mat after load
        namedWindow("Step 1 image loaded", WINDOW_AUTOSIZE);
        imshow("Step 1 image loaded", LoadedImage);
        // Save the result from LoadedImage to Step1.JPG
        imwrite("Step1.JPG", LoadedImage);

        // Parameters of your sliding window
        int windows_n_rows = 60;
        int windows_n_cols = 60;
        // Step of each window
        int StepSlide = 30;

        // Just a copy of the loaded image.
        // Note that Mat img = LoadedImage; only puts a reference on LoadedImage.
        // What does it mean? If you change img, LoadedImage is changed too.
        // If you want to make a copy and not change the source image, use clone();
        Mat DrawResultGrid = LoadedImage.clone();

        // Cycle row step
        for (int row = 0; row <= LoadedImage.rows - windows_n_rows; row += StepSlide)
        {
            // Cycle col step
            for (int col = 0; col <= LoadedImage.cols - windows_n_cols; col += StepSlide)
            {
                // There could be a feature evaluator over the window here

                // Resulting window; note that Rect takes (x, y, width, height)
                Rect windows(col, row, windows_n_cols, windows_n_rows);

                Mat DrawResultHere = LoadedImage.clone();

                // Draw only the current rectangle
                rectangle(DrawResultHere, windows, Scalar(255), 1, 8, 0);
                // Draw the grid
                rectangle(DrawResultGrid, windows, Scalar(255), 1, 8, 0);

                // Show the rectangle
                namedWindow("Step 2 draw Rectangle", WINDOW_AUTOSIZE);
                imshow("Step 2 draw Rectangle", DrawResultHere);
                imwrite("Step2.JPG", DrawResultHere);

                // Show the grid
                namedWindow("Step 3 Show Grid", WINDOW_AUTOSIZE);
                imshow("Step 3 Show Grid", DrawResultGrid);
                imwrite("Step3.JPG", DrawResultGrid);

                // Select the window roi
                Mat Roi = LoadedImage(windows);

                // Show the ROI
                namedWindow("Step 4 Draw selected Roi", WINDOW_AUTOSIZE);
                imshow("Step 4 Draw selected Roi", Roi);
                imwrite("Step4.JPG", Roi);

                waitKey(100);
            }
        }
        return 0;
    }



    Opencv ROI, Region of Interest

    A simple opencv C++ tutorial on how to work with ROI. A code example about selecting a rectangular region of interest inside an image and cutting out or displaying part of the image from the bigger picture. There is nothing difficult about this. The only trick is in one line of code. 

    Rect RectangleToSelect(x, y, width, height);
    Mat source;
    Mat roiImage = source(RectangleToSelect);

    This is the first post in this series. Simple opencv tutorials like this are all over the web. I would like to visualize all my steps through the code and //comment them. Each tutorial will contain a small number of steps to keep the reader focused. The first tutorial about Mat resizing is available under this link: Mat Resize

    I am using Visual Studio 2015. How to use Opencv 3.0.0 with Visual Studio can be found here: install opencv visual studio 2015. In Visual Studio 2015 the best option is to use NUGET packages; here is described how to install Opencv by NUGET. It is easy, working under one minute after you find the NUGET packages console. 

    Opencv select mat ROI tutorial example


    #include "opencv2/highgui.hpp"
    #include "opencv2/imgproc.hpp"

    using namespace cv;
    using namespace std;

    int main(int argc, const char** argv)
    {
        // Load the image from file
        Mat LoadedImage;
        // Just load image Lenna.png from the project dir into the LoadedImage Mat
        LoadedImage = imread("Lenna.png", IMREAD_COLOR);
        // I would like to visualize the Mat step by step to see the result immediately.
        // Show what is in the Mat after load
        namedWindow("Step 1 image loaded", WINDOW_AUTOSIZE);
        imshow("Step 1 image loaded", LoadedImage);
        // Save the result from LoadedImage to Step1.JPG
        imwrite("Step1.JPG", LoadedImage);

        // This constructs Rectangle Rec starting at x=100, y=100 with width=200 and height=200
        Rect Rec(100, 100, 200, 200);

        // Draw the rectangle into LoadedImage.
        // Parameters are (target Mat, Rec describing where to draw the rectangle,
        // Scalar color, 1 thickness, 8 line type, and 0 shift position)
        rectangle(LoadedImage, Rec, Scalar(255), 1, 8, 0);

        // Show the rectangle
        namedWindow("Step 2 draw Rectangle", WINDOW_AUTOSIZE);
        imshow("Step 2 draw Rectangle", LoadedImage);
        // Save the result from LoadedImage to Step2.JPG
        imwrite("Step2.JPG", LoadedImage);

        // Select the area described by Rec and write the result to Roi
        Mat Roi = LoadedImage(Rec);
        namedWindow("Step 3 Draw selected Roi", WINDOW_AUTOSIZE);
        imshow("Step 3 Draw selected Roi", Roi);
        // Save the result from Roi to Step3.JPG
        imwrite("Step3.JPG", Roi);

        // Put the roi back into the source image
        // if you want to show the detail
        // visualized within the context of the source image.

        // This rectangle describes the target where you want to
        // put your roi into the original image.
        // !! Width and height of the target rect must match the Roi size.
        // Let's put our Roi into the origin
        Rect WhereRec(0, 0, Roi.cols, Roi.rows);
        // This copies the Roi image into LoadedImage at position WhereRec.
        // Roi is a view into LoadedImage, so clone it first to avoid
        // copying from a region that is being overwritten.
        Roi.clone().copyTo(LoadedImage(WhereRec));

        namedWindow("Step 4 Final result", WINDOW_AUTOSIZE);
        imshow("Step 4 Final result", LoadedImage);
        // Save the result from LoadedImage to Step4.JPG
        imwrite("Step4.JPG", LoadedImage);

        waitKey(0);
        return 0;
    }


    Opencv Mat Resize 

    Resize the Mat or image in an Opencv C++ tutorial. It is an obviously simple task, and important to learn. This tutorial is visualized step by step, and each step is well described. The main trick is in this simple code.

    Mat Input;
    Mat Resized;
    int ColumnOfNewImage = 60;
    int RowsOfNewImage = 60;
    resize(Input, Resized, Size(ColumnOfNewImage,RowsOfNewImage));

    This code just takes an input image and saves the resized result to an output Mat. How big the resized image is depends on the Size. Size contains just two parameters: the number of columns (width) and the number of rows (height) the result should have. That is basically it. Enjoy.

                                                    Boring same face again and again. 
    Opencv Mat Tutorial

    Load Image, resize and save Opencv C++

    #include <Windows.h>
    #include "opencv2\highgui.hpp"
    #include "opencv2\imgproc.hpp"
    #include "opencv2\video\background_segm.hpp"
    #include "opencv2\video\tracking.hpp"

    using namespace cv;
    using namespace std;

    int main(int argc, const char** argv)
    //  Load the image from file
    Mat LoadedImage;
    // Just loaded image Lenna.png from project dir to LoadedImage Mat
    LoadedImage = imread("Lenna.png", IMREAD_COLOR);
    //I would like to visualize Mat step by step to see the result immediately.
    // Show what is in the Mat after load
    namedWindow("Step 1 image loaded", WINDOW_AUTOSIZE);
    imshow("Step 1 image loaded", LoadedImage);
    // Same the result from LoadedImage to Step1.JPG
    imwrite("Step1.JPG", LoadedImage);
           // Saved Image looks like original :)
    Opencv Mat tutorial

    // You can load colored image directly as gray scale
    LoadedImage = imread("Lenna.png", IMREAD_GRAYSCALE);
    // Show what is in the Mat after load
    namedWindow("Step 2 gray image loaded", WINDOW_AUTOSIZE);
    imshow("Step 2 gray image loaded", LoadedImage);
            // Show the result for a longer time.
            // If you want to see video frames at high rates in a loop, just put waitKey(20) here.
    Opencv Mat tutorial

    // Save the result from LoadedImage to Step2.JPG
    imwrite("Step2.JPG", LoadedImage);
     //  Basic resize and rescale 
    // Resize LoadedImage and save the result to same Mat loaded Image.
    // You can also resize(LoadedImage, Result, ...)

    // Load again source images
    LoadedImage = imread("Lenna.png", IMREAD_COLOR);
     // You can resize to any size you want: Size(width, height)
    resize(LoadedImage, LoadedImage, Size(100, 100));
    // Vizualization
    namedWindow("Step 3 image resize", WINDOW_AUTOSIZE);
    imshow("Step 3 image resize", LoadedImage);

     // Yes it is smaller than source. 100x100 image
    Opencv Mat Resize
     //Save above image to Step3.jpg 
    imwrite("Step3.JPG", LoadedImage);
    LoadedImage = imread("Lenna.png", IMREAD_COLOR);

    // Better is to resize based on the ratio of width and height.
    // Width and height are 2 times smaller than in the original source image.
    // The result is saved into the same Mat. If you are confused by this,
    // you can modify the code, add a Mat outputImage and display that instead.
    // !! cols is the number of columns of the image Mat and rows the number of rows.
    // cols and rows are the same as width and height.
    resize(LoadedImage, LoadedImage, Size(LoadedImage.cols/2, LoadedImage.rows/2));
    // Vizualization
    namedWindow("Step 4 image resize better", WINDOW_AUTOSIZE);
    imshow("Step 4 image resize better", LoadedImage);
    Opencv Mat Resize
    // Yes, it is 2 times smaller than the source
    // Save
    imwrite("Step4.JPG", LoadedImage);
     // All the steps are saved as Step1.JPG to Step4.JPG
     // Wait for a key press so the windows stay visible
     waitKey(0);
     return 0;
    }

    See you soon

    Computer Vision for Visual Effects

    This is great series of lecture about computer vision and movie tricks by Rich Radke.

    The first lecture is a sort of introduction to the world of movie magic and the tricks related to computer vision techniques. Look at this really good overview for your own motivation. The film industry has been engaged in video post-processing for a long time. You can find topics here ranging from segmentation to tracking of complex models, and many other great practical computer vision problems.

    Some of them are just waiting for the right time to become part of our lives on mobile devices, not only the prerogative of the movie industry.

    More Video Lectures Here.

    I find great inspiration in his book, too.

    Computer Vision FX more about book Here

    Facial landmark detector

    Flandmark detection

    This is another great result of the Czech school of computer vision. The facial landmark detector was developed at the Czech Technical University in Prague, Center for Machine Perception, mainly by Michal Uřičář and Vojtěch Franc.
    They both work for Eyedea on commercial products, if I remember correctly.

    This is me on the video, using OpenCV 3 under Visual Studio 2015 with the CLandmark lib. I have no use for it in my products yet, but it is pretty cool to play with in free time. This is a great cross-platform lib for iPhone and Android applications and also for Windows 10 Universal Apps.

    Look at the video on the original project page. I experimented a bit with floating point precision in the solver, and the results are a little bit worse. :)


    Here are links to the original web pages of the projects with broader descriptions.


    Flandmark is an open source C library (with interface to MATLAB) implementing a facial landmark detector in static images.


    CLandmark is an open-source facial landmark library written in C++, the next great generation of Flandmark.
    References are taken from both project sites.

    [1] M. Uricar, V. Franc, D. Thomas, A. Sugimoto, and V. Hlavac, Real-time Multi-view Facial Landmark Detector Learned by the Structured Output SVM, BWILD '15: International Workshop on Biometrics in the Wild, 2015.

    [2] M. Uricar, V. Franc and V. Hlavac, Detector of Facial Landmarks Learned by the Structured Output SVM, VISAPP '12: Proceedings of the 7th International Conference on Computer Vision Theory and Applications, 2012. Received Best Paper Award 

    [3] M. Uricar, Detector of facial landmarks, Master's Thesis, supervised by V. Franc, May 2011. [pdf]

    [4] J. Sivic, M. Everingham and A. Zisserman, "Who are you?" - Learning Person Specific Classifiers from Video, Proc. of IEEE Conference on Computer Vision and Pattern Recognition, 2009.

    [5] G. B. Huang, M. Ramesh, T. Berg and E. Learned-Miller, Labeled faces in the wild: A database for studying face recognition in unconstrained environments, Technical Report 07-49, University of Massachusetts, Amherst, 2007.
    clever advertising panel face detection

    Smart billboards with computer vision

    Smart billboards, like the big TV panels in supermarkets, are not really smart. This simple example demonstrates such a sequence: simply say hello to a potential customer when the panel sees incoming people, then start a sequence of interactive messages to reach more customers. What is cool about it? You can measure the number of people the panel sees and calculate the ratio of people who touch the panel. Based on this information, the panel can automatically choose the best sequence from the database.

    Opencv Portable Native Client 

    I wrote this simple application in C++ using the Pepper API to communicate directly with JavaScript. It is a kind of sandbox that runs the program under Chromium (the Google Chrome browser). This means the program runs on the client side, which is really cool: a C++ app on the client side, distributed over the internet. This technology is mainly used to port old games and demanding applications to the web.

     Interactive advertising and responsible design

    The best thing is connecting computer vision applications more challenging than mine with the real world, and designing a simple HTML interface for your smart panel.

    The main advantages are:
    1. Distribution, distribution, distribution of these apps.
    2. Updates.
    3. Cross-platform C++ code.
    4. A simple HTML interface for designers.

