
Building OpenCV 4.x with FFmpeg on Ubuntu 22.04

 

I have compiled OpenCV from source many times on different platforms and configurations. Recently I tried to compile OpenCV on Ubuntu against FFmpeg. So I first compiled FFmpeg from source, then tried to build OpenCV linked to it, and failed. The short message: use apt-get to install FFmpeg's libav* development packages instead of building them from source for OpenCV. Why did I get to this point? I need an OpenCV library compiled with FFmpeg so the application can read a video stream. Additionally, the final app needs OpenCV with this setup as well as a custom FFmpeg build to create a streaming output of my processed video. So I need two different FFmpeg builds in the whole process. I will give more details in the following lines. The goal is shown in the following picture: the app links against the OpenCV libraries (compiled with FFmpeg) and the FFmpeg libraries.

App linking Opencv and FFmpeg 

More Motivation

OpenCV can read video from many sources through FFMPEG, DSHOW, MSMF, GStreamer, and other backend implementations. In other words, OpenCV VideoCapture is capable of reading video from files and from video streams. On the other hand, OpenCV VideoWriter can save video to files but is limited when it comes to creating a video stream. VideoWriter can stream video out of OpenCV through a GStreamer pipeline, but then GStreamer needs to be linked with OpenCV and installed on your system at runtime. I described this in more detail in the tutorial on streaming from OpenCV to NGINX by RTMP and further to the web as HLS here. I also published a compilation tutorial for OpenCV and GStreamer on Windows with Visual Studio 2022 here.
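
To make the reading and writing paths above concrete, here is a minimal sketch (separate from my app): it opens a source with the FFmpeg backend and writes the frames back out through a GStreamer pipeline passed as the VideoWriter "filename". It assumes OpenCV was built with both the FFMPEG and GStreamer backends; the input file, RTMP URL, and pipeline string are placeholders.

// Minimal sketch: read with the FFmpeg backend, stream out through a GStreamer pipeline.
// Assumes OpenCV built with FFMPEG and GStreamer; the source and pipeline are placeholders.
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    // Reading: VideoCapture can use FFMPEG, GSTREAMER, MSMF, DSHOW, ... backends.
    cv::VideoCapture cap("video.mp4", cv::CAP_FFMPEG);
    if (!cap.isOpened()) return -1;

    int w = static_cast<int>(cap.get(cv::CAP_PROP_FRAME_WIDTH));
    int h = static_cast<int>(cap.get(cv::CAP_PROP_FRAME_HEIGHT));
    double fps = cap.get(cv::CAP_PROP_FPS);

    // Writing a stream: VideoWriter targets files, but with the GStreamer backend
    // the "filename" can be a whole pipeline ending in a network sink.
    cv::VideoWriter writer("appsrc ! videoconvert ! x264enc tune=zerolatency "
                           "! flvmux ! rtmpsink location=rtmp://localhost/live/mystream",
                           cv::CAP_GSTREAMER, 0, fps, cv::Size(w, h), true);
    if (!writer.isOpened()) return -1;

    cv::Mat frame;
    while (cap.read(frame) && !frame.empty())
        writer.write(frame);
    return 0;
}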

So the goal is to write an application that reads a video stream with VideoCapture and processes it. I will then write an alternative to VideoWriter, based on FFmpeg, to stream the processed video out using H.265.

Compile FFmpeg from source

This FFmpeg build will not be used to build OpenCV. It will be linked with the application that uses OpenCV and FFmpeg to create a custom streaming output of the processed video. The full official compilation guide is quite nice; see here.

  1. Get dependencies 
sudo apt-get update -qq && sudo apt-get -y install \
  autoconf \
  automake \
  build-essential \
  cmake \
  git-core \
  libass-dev \
  libfreetype6-dev \
  libgnutls28-dev \
  libmp3lame-dev \
  libsdl2-dev \
  libtool \
  libva-dev \
  libvdpau-dev \
  libvorbis-dev \
  libxcb1-dev \
  libxcb-shm0-dev \
  libxcb-xfixes0-dev \
  meson \
  ninja-build \
  pkg-config \
  texinfo \
  wget \
  yasm \
  zlib1g-dev \
  nasm \
  libx264-dev \
  libx265-dev \
  libnuma-dev \
  libvpx-dev \
  libfdk-aac-dev \
  libopus-dev \
  libdav1d-dev

2. Get FFmpeg, configure, compile, and install at once

sudo -u user sh -c ' mkdir ~/ffmpeg_sources && cd ~/ffmpeg_sources && \
wget -O ffmpeg-snapshot.tar.bz2 https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2 && \
tar xjvf ffmpeg-snapshot.tar.bz2 && \
cd ffmpeg && \
PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
  --prefix="$HOME/ffmpeg_build" \
  --pkg-config-flags="--static" \
  --extra-cflags="-I$HOME/ffmpeg_build/include" \
  --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
  --extra-libs="-lpthread" \
  --ld="g++" \
  --enable-shared \
  --bindir="$HOME/bin" \
  --enable-gpl \
  --enable-gnutls \
  --enable-libass \
  --enable-libfdk-aac \
  --enable-libfreetype \
  --enable-libmp3lame \
  --enable-libopus \
  --enable-libdav1d \
  --enable-libvorbis \
  --enable-libvpx \
  --enable-libx264 \
  --enable-libx265 \
  --enable-nonfree && \
PATH="$HOME/bin:$PATH" make && \
make install && \
hash -r '
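
Before moving on, it is worth checking that the freshly installed libraries under ~/ffmpeg_build actually link and run. The small program below is just a sanity-check sketch for that; the include and library paths match the prefix used in the configure step above.

// check_ffmpeg.cpp -- sanity check that the custom FFmpeg build links and runs.
// Build: g++ check_ffmpeg.cpp -I$HOME/ffmpeg_build/include -L$HOME/ffmpeg_build/lib -lavcodec -lavutil -o check_ffmpeg
// Run:   LD_LIBRARY_PATH=$HOME/ffmpeg_build/lib ./check_ffmpeg   (shared build)
#include <iostream>

extern "C" {
#include <libavutil/avutil.h>
#include <libavcodec/avcodec.h>
}

int main()
{
    std::cout << "FFmpeg version : " << av_version_info() << std::endl;
    std::cout << "Configuration  : " << avcodec_configuration() << std::endl;
    return 0;
}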

Build OpenCV 4.x with FFmpeg on Ubuntu 22.04

Now, I would like to build OpenCV against a different FFmpeg. The reason is simple and a bit silly: the newest FFmpeg build from the previous step is not compatible with the OpenCV 4.x CMake configuration (I am not sure why). Luckily, that does not matter in my case.

  1. Install the FFmpeg dependencies (libavformat-dev, libavcodec-dev, libavutil-dev, and libswscale-dev are installed by the apt install command below)
sudo apt install cmake libtbb2 g++ wget unzip ffmpeg libgtk2.0-dev libavformat-dev \
  libavcodec-dev libavutil-dev libswscale-dev libtbb-dev libjpeg-dev libpng-dev \
  libtiff-dev

2. Download, configure, and build OpenCV 4.x

Run the following commands one by one.

sudo wget -O opencv.zip https://github.com/opencv/opencv/archive/4.x.zip
sudo unzip opencv.zip
sudo mkdir -p build && cd build
sudo cmake -DHAVE_FFMPEG=ON ../opencv-4.x

The output of the CMake configuration command above should list FFMPEG before you build OpenCV from source. The configuration summary should contain the following values.

Opencv configuration with FFmpeg

The next step is to compile and install OpenCV with the following commands.

sudo cmake --build .
sudo make install

Cool, done. I now have OpenCV compiled and installed with FFmpeg, and, in a separate directory, the newest FFmpeg build from the first part.

Create an OpenCV App and Link FFmpeg

So now, my new OpenCV application for video streaming. The class FFStreamOpencv is still under development and tuning. It uses FFmpeg, and its source code is not part of this text; maybe this topic will be covered in a future post.
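
Since the class source is not published here, the header below is only a hypothetical sketch of the interface that the calls in main() assume: a constructor taking a stream name, output URL, input and output resolution, frame rate, and bitrate, plus a method that consumes a raw BGR frame buffer. The real ffstreamopencv.hpp may differ.

// ffstreamopencv.hpp -- hypothetical interface matching the calls used in main() below.
// The real class (an FFmpeg encoder/muxer inside) is still under development.
#pragma once
#include <cstdint>
#include <string>

class FFStreamOpencv {
public:
    FFStreamOpencv(const std::string& name, const std::string& output_url,
                   int in_width, int in_height,
                   int out_width, int out_height,
                   int fps, int64_t bitrate);
    ~FFStreamOpencv();

    // Takes one raw BGR frame (in_width x in_height x 3 bytes) and pushes it
    // through the FFmpeg encoder/muxer to the output URL.
    void stream_image(const uint8_t* bgr_data);
};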

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/video.hpp>
#include <iostream>

#include "ffstreamopencv.hpp"

using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    if (argc != 2)
    {
        cout << "usage: app <video_file_or_stream_url>" << endl;
        return -1;
    }
    cv::VideoCapture video_capture;
    video_capture.open(argv[1]);                 // e.g. /mnt/c/www/town.avi
    if (!video_capture.isOpened())
    {
        cout << "Cannot open the video source" << endl;
        return -1;
    }
    int cap_fps          = video_capture.get(cv::CAP_PROP_FPS);
    int cap_frame_width  = video_capture.get(cv::CAP_PROP_FRAME_WIDTH);
    int cap_frame_height = video_capture.get(cv::CAP_PROP_FRAME_HEIGHT);
    // Stream to a local RTMP endpoint, rescaled to 640x480 at 20 fps and 500 kbps.
    FFStreamOpencv stream("main", "rtmp://localhost/live/mystream",
                          cap_frame_width, cap_frame_height, 640, 480, 20, 500000);
    namedWindow("Display Image", WINDOW_AUTOSIZE);
    for (;;) {
        Mat image;
        video_capture >> image;
        if (image.empty())
            break;                               // end of file or lost stream
        stream.stream_image(image.data);         // push the BGR frame to the FFmpeg streamer
        imshow("Display Image", image);
        waitKey(1);                              // short delay so the window refreshes
    }
    return 0;
}

The FFStreamOpencv class uses my newest FFmpeg build, with the following header files included.

extern "C" {
#include <libavutil/opt.h>
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
#include <libavutil/common.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavutil/samplefmt.h>

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>
}

I have a simple custom CMakeLists.txt that builds the app from the app.cpp and ffstreamopencv.cpp sources. You can see the include_directories entries as well as target_link_libraries for my OpenCV and FFmpeg.

cmake_minimum_required(VERSION 3.22)
set(CMAKE_CXX_STANDARD 11)
project(app)

find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(/home/xxx/ffmpeg_build/include)
link_directories(/usr/local/lib /home/xxx/ffmpeg_build/lib)

add_executable(app app.cpp ffstreamopencv.cpp)
target_link_libraries(app avcodec avutil avformat swscale ${OpenCV_LIBS})

The CMakeLists.txt is used to configure, compile, and link my app with the usual commands.

cmake .
make

I am currently tuning the video streaming to my NGINX server using the x265 codec. I am still not satisfied with the test results. The encoder reports the following configuration:

x265 [info]: HEVC encoder version 3.5+1-f0c1022b6
x265 [info]: build info [Linux][GCC 11.2.0][64 bit] 8bit+10bit+12bit
x265 [info]: using cpu capabilities: MMX2 SSE2Fast LZCNT SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
x265 [info]: Main profile, Level-3 (Main tier)
x265 [warning]: No thread pool allocated, --wpp disabled
x265 [warning]: No thread pool allocated, --lookahead-slices disabled
x265 [info]: Slices                              : 1
x265 [info]: frame threads / pool features       : 1 / none
x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
x265 [info]: ME / range / subpel / merge         : hex / 57 / 2 / 2
x265 [info]: Lookahead / bframes / badapt        : 0 / 0 / 0
x265 [info]: b-pyramid / weightp / weightb       : 0 / 1 / 0
x265 [info]: References / ref-limit  cu / depth  : 3 / on / on
x265 [info]: AQ: mode / str / qg-size / cu-tree  : 2 / 1.0 / 32 / 0
x265 [info]: Rate Control / qCompress            : ABR-500 kbps / 0.60
x265 [info]: tools: rd=2 psy-rd=2.00 rskip mode=1 signhide tmvp fast-intra
x265 [info]: tools: strong-intra-smoothing deblock sao
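
For context, the values in the log above (ABR at 500 kbps, no B-frames, no lookahead) come from how the encoder context is configured inside FFStreamOpencv. The snippet below is only a rough sketch of how such a libx265 context might be opened through the FFmpeg API; the preset and tune options are illustrative choices, not necessarily the ones that produced this exact log.

// Sketch: open a libx265 encoder context roughly matching the log above
// (500 kbps ABR, 20 fps, no B-frames). Error handling is kept minimal.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}

AVCodecContext* open_x265_encoder(int width, int height, int fps, int64_t bitrate)
{
    const AVCodec* codec = avcodec_find_encoder_by_name("libx265");
    if (!codec) return nullptr;

    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    ctx->width        = width;               // e.g. 640
    ctx->height       = height;              // e.g. 480
    ctx->time_base    = AVRational{1, fps};  // e.g. 20 fps
    ctx->framerate    = AVRational{fps, 1};
    ctx->pix_fmt      = AV_PIX_FMT_YUV420P;
    ctx->bit_rate     = bitrate;             // 500000 -> "ABR-500 kbps" in the log
    ctx->max_b_frames = 0;                   // "bframes : 0" in the log
    ctx->gop_size     = fps;                 // one keyframe per second (illustrative)

    // x265-specific options go through the private data of the context.
    av_opt_set(ctx->priv_data, "preset", "fast", 0);        // speed/quality trade-off
    av_opt_set(ctx->priv_data, "tune", "zerolatency", 0);   // disables lookahead/B-frames

    if (avcodec_open2(ctx, codec, nullptr) < 0) {
        avcodec_free_context(&ctx);
        return nullptr;
    }
    return ctx;
}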

Conclusion

This text can help you build OpenCV with FFmpeg, especially if you have tried to build OpenCV against a custom FFmpeg and failed. In that case, consider installing the needed FFmpeg libraries directly via apt install.

Additionally, this tutorial can help you build FFmpeg from source on Ubuntu.

Finally, it shows a CMake configuration to build and link your OpenCV-based video processing app against both OpenCV and the FFmpeg libraries built from source.
