In this article, we look at various demo programs showcasing the project: webcam capture and viewing via OpenGL rendering, rendering live video with the Enhanced Video Renderer through the CaptureManager COM server, and developing a DirectShow filter that integrates the CaptureManager SDK into third-party webcam applications (such as Skype or Zoom).
Over the long development of the CaptureManager project, I made significant progress and received many questions about integrating it into other projects, including open source ones. After giving it some thought, I decided to publish the source code of the CaptureManager SDK under an open source license for free use by the friendly community of software developers. The source code of the CaptureManager SDK can be downloaded using this link.
I would be glad to receive your feedback about its usefulness, your ideas for improving the SDK's functionality, and your recommendations for other developers via the following links:
The most interesting demo programs are listed below:
- WPFVirtualCamera - a DirectShow filter for integrating the CaptureManager SDK into third-party webcam applications (such as Skype or Zoom)
- WPFStreamer - live video and audio streaming to Facebook and YouTube (Facebook requires all live video streams to use RTMPS, i.e. the Real-Time Messaging Protocol (RTMP) over a TLS/SSL connection; the encryption protects streaming content and ensures a greater level of data integrity) - Freeware version
- WPFRTSPClient - Freeware version
- WPFRtspServer - Freeware version
CaptureManager SDK Versions
CaptureManager Help File
Introduction
This article presents my project for capturing video and audio sources on Windows OS with Microsoft Media Foundation.
I have spent a lot of time resolving various tasks involving the processing of video and audio data, and have researched techniques for capturing video and audio on the Windows family of OSs. For a long time, DirectShow was the main technology for capturing live video on Windows. However, with Vista, Microsoft introduced a new framework for working with video and audio - Microsoft Media Foundation - and since Windows 7, it has supported webcams via the USB Video Class driver and line audio input. I have included that technology in many projects and written two articles about it:
Those articles present simple libraries for capturing webcams and grabbing image frames, but Microsoft Media Foundation is a much more powerful framework. After some thought, I decided to write a new solution for working with live video and audio - one more flexible than my previous solutions, one that draws much more of the power (or force :)) of Microsoft Media Foundation, and one that can compete with Microsoft's own solutions.
Background
I got the idea to write a new solution for working with webcams on the basis of Microsoft Media Foundation when I faced one unusual task. In the end, that task was not solved, but I had written some code and decided to continue developing the solution.
At the beginning, the solution included only a few classes and could execute only a few functions, but as new demands accumulated, I decided to write a simple SDK that makes it easy to configure capture for new tasks, and to inject newly developed code by implementing Microsoft Media Foundation's and CaptureManager's interfaces.
As a result, I got this SDK for capturing, recording and streaming live video and audio from webcams using only Microsoft Media Foundation.
Using the Code
This demo program presents a simple configuration of webcam capturing and viewing via OpenGL rendering.
To use CaptureManager, you need to call the appropriate interfaces of the CaptureManager COM server. To set the destination of the log output, get the ILogPrintOutControl interface:
CComPtrCustom<IClassFactory> lCoLogPrintOut;
HRESULT lhresult = CoGetClassObject(CLSID_CoLogPrintOut, CLSCTX_INPROC_SERVER, nullptr,
    IID_PPV_ARGS(&lCoLogPrintOut));
if (FAILED(lhresult))
return lhresult;
CComPtrCustom<ILogPrintOutControl> lLogPrintOutControl;
lCoLogPrintOut->LockServer(true);
lhresult = lCoLogPrintOut->CreateInstance(
nullptr,
IID_PPV_ARGS(&lLogPrintOutControl));
if (FAILED(lhresult))
return lhresult;
lhresult = lLogPrintOutControl->addPrintOutDestination(
(DWORD)INFO_LEVEL,
L"Log.txt");
if (FAILED(lhresult))
return lhresult;
To access the main methods of CaptureManager, get the ICaptureManagerControl interface:
CComPtrCustom<IClassFactory> lCoCaptureManager;
lhresult = CoGetClassObject(CLSID_CoCaptureManager, CLSCTX_INPROC_SERVER, nullptr,
    IID_PPV_ARGS(&lCoCaptureManager));
if (FAILED(lhresult))
return lhresult;
lCoCaptureManager->LockServer(true);
CComPtrCustom<ICaptureManagerControl> lCaptureManagerControl;
lhresult = lCoCaptureManager->CreateInstance(
nullptr,
IID_PPV_ARGS(&lCaptureManagerControl));
if (FAILED(lhresult))
return lhresult;
The topology of this demo program is presented in the following schema:
You can get this demo program from the link below:
This demo program presents how to use the Enhanced Video Renderer to render live video via the CaptureManager COM server.
This code makes it easy to integrate the CaptureManager SDK into an existing Windows application - it needs only a handle to the GUI element that is used as the webcam display. The code tracks resizing of the GUI element and modifies the renderer to adjust the video size and aspect ratio.
You can get this demo program from the link below:
This demo program presents the way of linking CaptureManager.dll with a C# project. In this project, the CaptureManager SDK is wrapped by the C# class CaptureManager. This class hides the direct work with the COM interface and the marshaling of image data from unmanaged code. For flexibility, the class loads CaptureManager.dll directly into the process and gets the COM objects WITHOUT calling the COM infrastructure - this allows the CaptureManager COM server to be used WITHOUT any registration in the system.
You can get this demo program from the following link:
This demo program presents the way of collecting information from sources and presenting it in a readable form. Information from the sources is presented as an XML document. There are several reasons for using the XML format for transferring information from the COM server:
- Simple structure - Microsoft Media Foundation uses a huge number of GUID constants and types which you need to know to understand the info, but in this solution, all of them are presented in a friendly form.
- Easy transfer from COM server
- Easy parsing - an XPath expression can extract almost any needed information and present it; this is easier than working with statically defined classes and lists of info.
- In most cases, the user only needs to read which features a source supports and select one of them.
- An XML document is easily integrated into different presentation models (for instance, MVVM).
This demo program can be obtained from the links below:
This demo program presents a way of viewing live video from a webcam via the COM Server in C# - WPF, by requesting video frames from the WPF thread. This code gets the XML document from CaptureManager and parses it for some information: the device name, the type of data stream and the list of supported resolutions. The code launches the grabbing of video frames via the ISampleGrabber interface and works in sync mode.
You can get this demo program from the links below:
This demo program presents a way of viewing live video from a webcam via the COM Server in C# - WPF, by receiving video frames from the CaptureManager thread. This code gets the XML document from CaptureManager and parses it for some information: the device name, the type of data stream and the list of supported resolutions. The code registers the update method in a class that implements the ISampleGrabberCallback interface and injects this class into the COM Server. Whenever CaptureManager gets a new video frame, it calls the WPF update method, which posts a message to update the frame. This way, the WPF thread is not overloaded by the grabbing task.
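The callback pattern described above - a capture thread calling update, which only posts a message for the UI thread to handle - can be modeled in a few lines of Python (the names here are illustrative, not part of the SDK):

```python
import queue
import threading

# Minimal model of the callback pattern: the capture thread only posts a
# lightweight "frame ready" notification; the UI thread drains the queue
# and does the actual display update, so it is never blocked by grabbing.
ui_queue = queue.Queue()
frames_shown = 0

def on_frame_grabbed(frame):          # runs on the capture thread
    ui_queue.put(frame)               # post a message, return immediately

def ui_drain(count):                  # runs on the UI (WPF) thread
    global frames_shown
    for _ in range(count):
        ui_queue.get()                # pick up the posted update
        frames_shown += 1             # here the real code would redraw

capture = threading.Thread(target=lambda: [on_frame_grabbed(i) for i in range(3)])
capture.start()
ui_drain(3)
capture.join()
```

The key design point is that the capture thread never touches the GUI: it only enqueues a notification and returns, so grabbing and rendering cannot block each other.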
This demo program can be obtained from the links below:
This demo program presents a way of viewing live video from a webcam via the COM Server in C# - WPF, using the Enhanced Video Renderer from the CaptureManager thread. This code gets the XML document from CaptureManager and parses it for some information: the device name, the type of data stream and the list of supported resolutions. The code gets the EVR node from CaptureManager by setting the HWND of an integrated WindowsForms panel. In this case, all work with video frames is executed outside the WPF thread.
This demo program can be obtained from the links below:
This demo program presents the way of working with the CaptureManager SDK for capturing, encoding and recording / broadcasting video and audio from web cameras, microphones, desktop screens and speakers via the COM Server in C# - WPF.
The code of this demo program presents the correct algorithm for working with the CaptureManager SDK via the COM Server interfaces. The end point of the data stream can be a file sink or a byte stream sink. The latter is a more interesting opportunity than simply saving into a file. In this demo program, the byte stream sink is implemented in the form of a TCP server with the loopback address and HTTP port 8080. A similar implementation can be found in the C++ demo program, but this one has a significant difference: here the TCP server is implemented in C#/.NET Framework. The CaptureManager SDK is written in C++ and uses the C interfaces of Windows Media Foundation; however, there is good news - the CaptureManager SDK only needs an implementation of the Windows Media Foundation interface IMFByteStream for streaming bytes. Implementing the C interface IMFByteStream in C# does not require any specific Windows Media Foundation functions - it only requires defining the interfaces IMFByteStream, IMFAsyncCallback and IMFAsyncResult, and the enum constants:
MFAsyncCallbackQueue
MFASync
MFByteStreamSeekingFlags
MFByteStreamSeekOrigin
MFByteStreamCapabilities
UnmanagedNameAttribute
Implementations of those interfaces can be found in the file NetworkStreamSink.cs, but I would like to draw your attention to the following points:
- In the IMFByteStream interface, there are four implemented methods:
GetCapabilities
Write
BeginWrite
EndWrite
In the first one, the code declares the type of the IMFByteStream implementation - writeable, but not seekable. The Write method is used for synchronous writing of data into the stream, while the BeginWrite and EndWrite methods are used for asynchronous writing. However, there are some important details: the Write method is called once, at the start - it writes the header with the streams' metadata: the type of encoder, the number of streams, the names of streams and other metadata. Asynchronous writing needs to execute the methods in the following order: BeginWrite, then Invoke on the IMFAsyncCallback pCallback argument, then EndWrite. However, the calls to BeginWrite and EndWrite may be guarded by the same mutex. This means that Invoke on the IMFAsyncCallback pCallback argument must be executed on a separate thread - for example, via ThreadPool.QueueUserWorkItem.
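To illustrate why the callback must be queued to another thread, here is a small Python model (the assumption, taken from the description above, is that a single non-recursive lock guards both BeginWrite and EndWrite). If begin_write invoked the callback inline while still holding the lock, end_write would block on the same lock forever; handing the callback to a worker thread, as ThreadPool.QueueUserWorkItem does, avoids that:

```python
import threading

write_lock = threading.Lock()   # models the mutex shared by BeginWrite and EndWrite
written = []

def end_write(data):            # models EndWrite: takes the same lock as BeginWrite
    with write_lock:
        written.append(data)

def begin_write(data, callback):
    with write_lock:
        # Calling callback(data) inline HERE would deadlock: end_write
        # would try to re-acquire write_lock, which this thread holds.
        worker = threading.Thread(target=callback, args=(data,))
        worker.start()          # models ThreadPool.QueueUserWorkItem
    worker.join()               # the lock is released above, so the worker can finish

begin_write(b"sample", end_write)
```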
- In the implementation of the TCP server, I have used an async call to BeginAcceptTcpClient, and write the header data at the start of each connection - this allows any number of clients to connect to the media stream server.
public void Start()
{
try
{
tcpListener = new TcpListener(Configuration.IPAddress, Configuration.Port);
tcpListener.Start();
tcpListener.BeginAcceptTcpClient(
new AsyncCallback(callBack),
tcpListener);
}
catch (Exception e)
{
}
}
public void Stop()
{
try
{
tcpListener.Stop();
foreach (var item in mClientBag)
{
item.Value.Client.Close();
item.Value.Client.Dispose();
item.Value.Close();
}
tcpListener.Server.Dispose();
}
catch (Exception e)
{
}
}
private void callBack(IAsyncResult aIAsyncResult)
{
TcpListener ltcpListener = (TcpListener)aIAsyncResult.AsyncState;
if (ltcpListener == null)
return;
TcpClient lclient = null;
try
{
lclient = ltcpListener.EndAcceptTcpClient(aIAsyncResult);
}
catch (Exception exc)
{
return;
}
if (lclient != null && lclient.Client.Connected)
{
StreamReader streamReader = new StreamReader(lclient.GetStream());
StringBuilder receivedData = new StringBuilder();
while (streamReader.Peek() > -1)
receivedData.Append(streamReader.ReadLine());
string request = GetRequest(receivedData.ToString());
if (!SuportedMethod(request))
{
SendError(StatusCode.BadRequest, "Only GET is supported.", lclient);
lclient.Client.Close();
lclient.Close();
}
else
{
Socket socket = lclient.Client;
if (socket.Connected)
{
SendHeader(StatusCode.OK, lclient);
lock (this)
{
if (mHeaderMemory != null)
{
int sentBytes = socket.Send(mHeaderMemory);
}
mClientBag[lclient] = lclient;
}
}
}
}
ltcpListener.BeginAcceptTcpClient(
new AsyncCallback(callBack),
ltcpListener);
}
- The header includes the MIME type of the byte stream, which will allow a future release to use the same solution for any type of media container - ASF, MP4, MKV.
private void SendHeader(string mimeType, long totalBytes,
StatusCode statusCode, TcpClient aTcpClient)
{
StringBuilder header = new StringBuilder();
header.Append(string.Format("HTTP/1.1 {0}\r\n", GetStatusCode(statusCode)));
header.Append(string.Format("Content-Type: {0}\r\n", mimeType));
header.Append(string.Format("Accept-Ranges: bytes\r\n"));
header.Append(string.Format("Server: {0}\r\n", Configuration.ServerName));
header.Append(string.Format("Connection: close\r\n"));
header.Append(string.Format("Content-Length: {0}\r\n", totalBytes));
header.Append("\r\n");
SendToClient(header.ToString(), aTcpClient);
}
This demo program can be obtained from the links below:
This demo program presents the way of working with the CaptureManager SDK for capturing, encoding and recording / broadcasting live video and audio from web cameras, microphones, desktop screens and speakers via the COM Server in the Qt framework. On Windows OS, there is a version of Qt for the Visual Studio compiler, but this demo uses the CaptureManager SDK with the Qt version for the MinGW compiler. Of course, this demo can be recompiled for the Visual Studio compiler, but the MinGW version shows the flexibility of this SDK and its compatibility with many other compilers. This demo includes code for capturing live video from web cameras and viewing it by pulling samples, by a callback of the view update code from the CaptureManager SDK's inner thread, or by drawing the image directly via the HWND of a widget.
Another example in this demo presents a way to connect sources, encoders and sinks into one processing pipeline. It shows how to record video and audio into a file, and includes real code for broadcasting a network stream to the Internet using the Qt QTcpServer and QTcpSocket classes.
You can get this demo program from the following link:
This demo program presents the way of working with the CaptureManager SDK for capturing live video from web cameras in a Python application on Windows OS. CaptureManager is built on Microsoft Media Foundation for desktop Windows OSs and COM technology. Like any COM server, CaptureManager can be integrated into a project via direct interface calls or via a type library. However, dynamically typed programming languages have some problems with using type libraries correctly. To resolve such problems, I included IDispatch and wrote implementations for many classes, but some projects need to work with pointers to blocks of memory, and that can be a problem with dynamic types. To simplify the solution, I implemented the IDispatch interface only for limited functionality. This demo program presents functionality for selecting sources and encoders, rendering video via an HWND, and saving into a file.
This demo program can be downloaded from the link below:
This demo program presents the way of working with the CaptureManager SDK for capturing a single frame from a video stream. The CaptureManager SDK includes a new mode for SampleGrabberCallSinkFactory - PULL.
This mode differs from the SYNC and ASYNC modes, which work continuously - they automatically send a request for the next sample after receiving the current one, without blocking. This leads to a queue of requests and samples, as in the image:
This type of working is useful for presenting video, but the task of capturing a single frame runs into the following difficulty: 99 percent of the frames are captured, decoded and delivered to the sink, but the customer's code never takes them from the sink. This happens in real tasks - for example, taking a single frame every second while the camera produces 30 frames per second. In this case, the CPU wastes time and power on 29 frames that are thrown out into the garbage. For such tasks, the new PULL mode can be more useful. The sink sends a request only when the customer's code asks for a new single frame:
This utilizes CPU power more effectively for the one task at hand. The new mode can also be useful for many image recognition projects - such projects usually process frames at a rate lower than the camera can produce, yet the video pipeline spends a lot of CPU power on frames that will never be processed. PULL mode releases some of that CPU power for other parts of the program.
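A toy Python model of the difference (the class and the numbers are illustrative, not the SDK API): in the continuous SYNC/ASYNC modes every produced frame is decoded, while in PULL mode only the requested frames are:

```python
class Decoder:
    """Counts how many frames the pipeline actually decodes."""
    def __init__(self):
        self.frames_decoded = 0
    def decode(self, n):
        self.frames_decoded += 1
        return "frame-%d" % n

def continuous_mode(decoder, produced):
    # SYNC/ASYNC: every frame is decoded and delivered to the sink,
    # whether or not the application ever reads it.
    return [decoder.decode(n) for n in range(produced)]

def pull_mode(decoder, produced, take_every):
    # PULL: a frame is decoded only when the application sends a request,
    # e.g. one frame per second from a 30 fps camera.
    return [decoder.decode(n) for n in range(produced) if n % take_every == 0]

push_decoder, pull_decoder = Decoder(), Decoder()
continuous_mode(push_decoder, 30)      # 30 frames decoded, 29 of them wasted
pull_mode(pull_decoder, 30, 30)        # a single frame decoded
```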
This demo program can be downloaded from the links below:
This demo program presents the way of working with the CaptureManager SDK for capturing the last serial frames from a video stream. CaptureManager SDK 1.2.0 includes a new type of stream control node - the sample accumulator node:
It includes two sample accumulator nodes, for 5 and 10 samples, which collect/accumulate the 5 or 10 LAST SAMPLES. The idea is that in the real world, a photo is not taken at the moment the event is seen. The human reaction to the event, the process of pressing the take-photo button, sending the event from the hardware layer to the customer's code, and requesting the single frame all take time - from 500 ms to 1 second. This leads to losing the important moment that was the reason for taking the photo. Sample accumulator nodes can compensate for such a delay by accumulating the last samples of the video stream. These samples can be obtained via the PULL mode of SampleGrabberCallSinkFactory. It sends the request:
and receives the last samples:
This can be useful for solving some problems, but accumulating samples needs a memory buffer that can grow quickly - for example, keeping the 10 last samples of an RGB32 image in 1080p format needs about 80 MBytes, and one second of accumulation of such video can need about 250 MBytes or more. Of course, it is possible to create a sample accumulator with a different schema - for example, one-by-three, which means keeping only every third sample from the video stream.
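A sample accumulator can be sketched as a fixed-size ring buffer; the sketch below also shows the every-third-sample decimation variant and the per-frame memory arithmetic (the class is illustrative, not the SDK's node):

```python
from collections import deque

class SampleAccumulator:
    """Keeps only the last `capacity` samples, optionally every n-th one."""
    def __init__(self, capacity, keep_every=1):
        self.buffer = deque(maxlen=capacity)   # old samples fall out automatically
        self.keep_every = keep_every           # 3 -> keep only every third sample
        self.count = 0
    def push(self, sample):
        if self.count % self.keep_every == 0:
            self.buffer.append(sample)
        self.count += 1
    def last_samples(self):                    # what a PULL request would receive
        return list(self.buffer)

acc = SampleAccumulator(capacity=5)
for n in range(100):                           # ~3 seconds of a 30 fps stream
    acc.push(n)
print(acc.last_samples())                      # only the 5 newest samples remain

# The memory cost that makes decimation attractive:
frame_bytes = 1920 * 1080 * 4                  # one RGB32 1080p frame, ~8 MB
```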
You can get this demo program from the links below:
This demo program presents the way of linking CaptureManager.dll with the Java VM. The project presents a Java wrapper of the CaptureManager COM server via JNI calls. It includes the CaptureManagerToJavaProxy framework for reflecting the Java code onto the C code of CaptureManagerNativeProxy.dll. The CaptureManagerToJavaProxy framework includes code for calling native code from a JAR - this makes it possible to build a runnable JAR file with CaptureManagerNativeProxy.dll and CaptureManager.dll packed inside. This project uses Microsoft Media Foundation, which limits Java projects to Windows OSs from Windows 7 onward.
This proxy program can be obtained from the links below:
This demo program presents the way of working with a web camera on Windows OSs via the Java VM. The demo presents live video from a web camera via the javax.swing GUI framework. It gets the HWND descriptor of a GUI component and uses the EVR for rendering without marshaling images through the Java VM's JNI - this saves CPU power:
In addition, this demo program includes functionality for recording video from the web camera into an ASF video file:
You can test this demo program by compiling the runnable JAR file or by downloading CaptureManagerSDKJavaxDemo.jar.zip, which contains CaptureManagerSDKJavaxDemo.jar. It is important to pay attention to the architecture of the VM - x86 or x86-64. In most situations, compiled Java code can be executed on both architectures. However, CaptureManagerToJavaProxy calls native code via JNI, which leads to the following problems: a native DLL with JNI for x86 cannot be loaded into an x86-64 Java VM, and a native DLL with JNI for x86-64 cannot be loaded into an x86 Java VM. These problems are resolved by detecting the Java VM architecture at runtime and loading the CaptureManagerNativeProxy.dll of the suitable architecture into the Java VM. This means that the JAR includes TWO copies of CaptureManagerNativeProxy.dll and TWO copies of CaptureManager.dll.
I think you will find it very convenient - of course, it works ONLY on Windows OSs since Windows 7.
This demo program can be downloaded from the links below:
This demo program presents the way of working with a web camera on Windows OSs with a WindowsForms GUI.
The demo presents live video from a web camera via the WindowsForms GUI framework. It gets the HWND descriptor of a GUI component and uses the EVR for rendering without marshaling data through C# - this saves CPU power:
This demo shows how to select different types of sources. At the present time, CaptureManager supports three source types:
- Webcam
- Screen
- DirectShow Crossbar
These sources are different, and several devices of these source types can be connected at the same time. This calls for a flexible way of selecting the suitable group of source types. The following code, which generates an XPath query, implements such functionality in this demo:
string lXPath = "//*[";
if (toolStripMenuItem1.Checked)
{
    lXPath += "Source.Attributes/Attribute" +
        "[@Name='MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_CATEGORY']" +
        "/SingleValue[@Value='CLSID_WebcamInterfaceDeviceCategory']";
}
if (toolStripMenuItem2.Checked)
{
    if (toolStripMenuItem1.Checked)
        lXPath += "or ";
    lXPath += "Source.Attributes/Attribute" +
        "[@Name='MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_HW_SOURCE']" +
        "/SingleValue[@Value='Software device']";
}
if (dSCrossbarToolStripMenuItem.Checked)
{
    if (toolStripMenuItem1.Checked || toolStripMenuItem2.Checked)
        lXPath += "or ";
    lXPath += "Source.Attributes/Attribute" +
        "[@Name='MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_CATEGORY']" +
        "/SingleValue[@Value='CLSID_VideoInputDeviceCategory']";
}
lXPath += "]";
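The essence of the code above is composing an XPath union of conditions with "or" between them; a compact Python sketch of the same logic (the condition string is shortened for readability, and the function name is mine):

```python
def build_source_filter(conditions):
    """Join the selected source-type conditions into one //*[...] query."""
    if not conditions:
        return "//*"                       # no filter: match every source
    return "//*[" + " or ".join(conditions) + "]"

# A shortened version of the webcam condition from the C# code above:
webcam = ("Source.Attributes/Attribute[@Name='..._VIDCAP_CATEGORY']"
          "/SingleValue[@Value='CLSID_WebcamInterfaceDeviceCategory']")

query = build_source_filter([webcam])
```

Collecting the checked conditions into a list first avoids the manual "is this the first condition?" bookkeeping that the C# version does before appending "or".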
In addition, this demo includes code for recording video and audio from media sources:
This demo program can be obtained from the links below:
This demo program presents the way a custom Microsoft Media Foundation Transform works with CaptureManager. The demo 'injects' an image into the live video via a special custom Microsoft Media Foundation Transform which works like an 'effect filter'.
You can get this demo program from the link below:
This demo program presents the way a custom Microsoft Media Foundation Transform works with CaptureManager. The demo 'injects' a text string into the live video via a special custom Microsoft Media Foundation Transform which works like an 'effect filter', and changes its content.
The text string content can be changed from the application side continuously, with the result displayed immediately. In this demo program, it is done by the following code:
wchar_t ltext[MAXBYTE];
while (!bQuit)
{
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
{
if (msg.message == WM_QUIT)
{
bQuit = TRUE;
}
else
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
_itow_s(++lFrameCount, ltext, 10);
lITextWriter->writeText(ltext);
Sleep(200);
}
You can get the demo program from the link below:
This demo program presents the way of working with the ICaptureProcessor, IInitilaizeCaptureSource, ICurrentMediaType and ISourceRequestResult interfaces of CaptureManagerToCSharpProxy. The demo implements the ICaptureProcessor interface in the ImageCaptureProcessor class - this class 'injects' customized data into the capture session. The ICaptureProcessor interface has the following methods:
void initilaize(IInitilaizeCaptureSource IInitilaizeCaptureSource)
void pause()
void setCurrentMediaType(ICurrentMediaType aICurrentMediaType)
void shutdown()
void sourceRequest(ISourceRequestResult aISourceRequestResult)
void start(long aStartPositionInHundredNanosecondUnits, ref Guid aGUIDTimeFormat)
void stop()
The method void initilaize(IInitilaizeCaptureSource IInitilaizeCaptureSource) must set in its IInitilaizeCaptureSource argument an XML text string with the description of the capture source - this data is obtained from the image info. The method void setCurrentMediaType(ICurrentMediaType aICurrentMediaType) selects the correct stream index and the correct media type. The method void sourceRequest(ISourceRequestResult aISourceRequestResult) is executed by the CaptureManager SDK to get the raw data in the format defined in the XML description. Such an XML document has the following form:
<presentationdescriptor streamcount="1">
<presentationdescriptor.attributes title="Attributes of Presentation">
<attribute description="Contains the unique symbolic link for a
video capture driver." guid="{58F0AAD8-22BF-4F8A-BB3D-D2C4978C6E2F}"
name="MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_SYMBOLIC_LINK"
title="The symbolic link for a video capture driver.">
<singlevalue value="ImageCaptureProcessor">
</singlevalue></attribute>
<attribute description="The display name is a human-readable string,
suitable for display in a user interface."
guid="{60D0E559-52F8-4FA2-BBCE-ACDB34A8EC01}"
name="MF_DEVSOURCE_ATTRIBUTE_FRIENDLY_NAME"
title="The display name for a device.">
<singlevalue value="Image Capture Processor">
</singlevalue></attribute>
</presentationdescriptor.attributes>
<streamdescriptor index="0" majortype="MFMediaType_Video"
majortypeguid="{73646976-0000-0010-8000-00AA00389B71}">
<mediatypes typecount="1">
<mediatype index="0">
<mediatypeitem description="Width and height of a video frame,
in pixels." guid="{1652C33D-D6B2-4012-B834-72030849A37D}"
name="MF_MT_FRAME_SIZE" title="Width and height of the video frame.">
<value.valueparts>
<valuepart title="Width" value="Temp_Width">
<valuepart title="Height" value="Temp_Height">
</valuepart></valuepart></value.valueparts>
</mediatypeitem>
<mediatypeitem description="Approximate data rate of the
video stream, in bits per second, for a video media type."
guid="{20332624-FB0D-4D9E-BD0D-CBF6786C102E}"
name="MF_MT_AVG_BITRATE"
title="Approximate data rate of the video stream.">
<singlevalue value="33570816">
</singlevalue></mediatypeitem>
<mediatypeitem description="The major type defines the
overall category of the media data."
guid="{48EBA18E-F8C9-4687-BF11-0A74C9F96A8F}"
name="MF_MT_MAJOR_TYPE" title="Major type GUID for a media type.">
<singlevalue guid="{73646976-0000-0010-8000-00AA00389B71}"
value="MFMediaType_Video">
</singlevalue></mediatypeitem>
<mediatypeitem description="Default surface stride,
for an uncompressed video media type.
Stride is the number of bytes needed to go from one row of pixels
to the next." guid="{644B4E48-1E02-4516-B0EB-C01CA9D49AC6}"
name="MF_MT_DEFAULT_STRIDE" title="Default surface stride.">
<singlevalue value="Temp_Stride">
</singlevalue></mediatypeitem>
<mediatypeitem description="Specifies for a media type
whether the samples have a fixed size."
guid="{B8EBEFAF-B718-4E04-B0A9-116775E3321B}"
name="MF_MT_FIXED_SIZE_SAMPLES"
title="The fixed size of samples in stream.">
<singlevalue value="True">
</singlevalue></mediatypeitem>
<mediatypeitem description="Frame rate of a video media type,
in frames per second."
guid="{C459A2E8-3D2C-4E44-B132-FEE5156C7BB0}"
name="MF_MT_FRAME_RATE" title="Frame rate.">
<ratiovalue value="10.0">
<value.valueparts>
<valuepart title="Numerator" value="10">
<valuepart title="Denominator" value="1">
</valuepart></valuepart></value.valueparts>
</ratiovalue>
</mediatypeitem>
<mediatypeitem description="Pixel aspect ratio for a
video media type." guid="{C6376A1E-8D0A-4027-BE45-6D9A0AD39BB6}"
name="MF_MT_PIXEL_ASPECT_RATIO" title="Pixel aspect ratio.">
<ratiovalue value="1">
<value.valueparts>
<valuepart title="Numerator" value="1">
<valuepart title="Denominator" value="1">
</valuepart></valuepart></value.valueparts>
</ratiovalue>
</mediatypeitem>
<mediatypeitem description="Specifies for a media type
whether each sample is independent of the other samples
in the stream." guid="{C9173739-5E56-461C-B713-46FB995CB95F}"
name="MF_MT_ALL_SAMPLES_INDEPENDENT" title="Independent of samples.">
<singlevalue value="True">
</singlevalue></mediatypeitem>
<mediatypeitem description="Specifies the size of each sample,
in bytes, in a media type."
guid="{DAD3AB78-1990-408B-BCE2-EBA673DACC10}"
name="MF_MT_SAMPLE_SIZE"
title="The fixed size of each sample in stream.">
<singlevalue value="Temp_SampleSize">
</singlevalue></mediatypeitem>
<mediatypeitem description="Describes how the frames in a
video media type are interlaced."
guid="{E2724BB8-E676-4806-B4B2-A8D6EFB44CCD}"
name="MF_MT_INTERLACE_MODE"
title="Describes how the frames are interlaced.">
<singlevalue value="MFVideoInterlace_Progressive">
</singlevalue></mediatypeitem>
<mediatypeitem description="The subtype GUID defines a
specific media format type within a major type."
guid="{F7E34C9A-42E8-4714-B74B-CB29D72C35E5}"
name="MF_MT_SUBTYPE"
title="Subtype GUID for a media type.">
<singlevalue guid="{Temp_SubTypeGUID}">
</singlevalue></mediatypeitem>
</mediatype>
</mediatypes>
</streamdescriptor>
</presentationdescriptor>
Temp_SubTypeGUID defines the image format, Temp_SampleSize is the byte size of the image, Temp_Stride is the stride in bytes (or 0 for compressed formats), and Temp_Width and Temp_Height are the pixel width and height of the image.
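For an uncompressed RGB32 image, the Temp_* placeholder values can be computed directly from the image dimensions; a small sketch of that arithmetic (assuming 4 bytes per pixel, as the RGB32 format implies; the function name is mine):

```python
def rgb32_placeholders(width, height):
    """Compute the Temp_* values for an uncompressed RGB32 frame.
    For compressed formats Temp_Stride would be 0 instead."""
    stride = width * 4            # bytes per row: 4 bytes per RGB32 pixel
    return {
        "Temp_Width": width,
        "Temp_Height": height,
        "Temp_Stride": stride,
        "Temp_SampleSize": stride * height,   # the whole frame, in bytes
    }

values = rgb32_placeholders(640, 480)   # stride 2560, sample size 1228800
```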
You can download this demo program from the links below:
This demo program presents the way of working with the ICaptureProcessor, IInitilaizeCaptureSource, ICurrentMediaType and ISourceRequestResult interfaces of CaptureManagerToCSharpProxy. The demo implements the ICaptureProcessor interface in the IPCameraMJPEGCaptureProcessor class - this class 'injects' MJPEG frames from an IP camera. It connects to the selected web camera over HTTP and redirects the received data into the capture session.
The ICaptureProcessor interface has the following methods:
void initilaize(IInitilaizeCaptureSource IInitilaizeCaptureSource)
void pause()
void setCurrentMediaType(ICurrentMediaType aICurrentMediaType)
void shutdown()
void sourceRequest(ISourceRequestResult aISourceRequestResult)
void start(long aStartPositionInHundredNanosecondUnits, ref Guid aGUIDTimeFormat)
void stop()
The method void initilaize(IInitilaizeCaptureSource IInitilaizeCaptureSource) must set in its IInitilaizeCaptureSource argument an XML text string with the description of the capture source - this data is obtained from the image. The method void setCurrentMediaType(ICurrentMediaType aICurrentMediaType) selects the correct stream index and the correct media type. The method void sourceRequest(ISourceRequestResult aISourceRequestResult) is executed by the CaptureManager SDK to get the raw data in the format defined in the XML description.
This demo program can be obtained from the following links:
This demo program presents the way of working with the IEVRMultiSinkFactory and IEVRStreamControl interfaces of CaptureManagerToCSharpProxy. The demo shows a way to create the EVR sink from a Direct3DSurface9 interface. WPF can present a DirectX texture as a regular image via the System.Windows.Interop.D3DImage class. However, the demand for compatibility across a wide range of Windows OSs allows using only the old DirectX technology - DirectX 9. This code creates a render-target texture through the Direct3DSurface9 interface, and CaptureManager uses this texture as the destination surface. This solution makes it easy to integrate the rendering result with WPF GUI components.
This demo program shows how to use scaling and the setSrcPosition method of the IEVRStreamControl interface for an effective implementation of zooming, and for defining and controlling a Region Of Interest (ROI).
setSrcPosition
method has arguments Left
, Right
, Top
, Bottom
which define corners of ROI Rectangle in relative value from 0.0f to 1.0f. This demo uses Screen Capture source for correct working without accessible USB cameras.
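For example, a pixel-space ROI can be converted to the relative 0.0..1.0 values that setSrcPosition expects with a small sketch like this (not SDK code):

```python
# Illustrative sketch (not SDK code): converting a pixel-space ROI into
# the relative Left/Right/Top/Bottom values (0.0..1.0) described above.
def roi_to_relative(x, y, w, h, frame_w, frame_h):
    """Return (left, right, top, bottom) as fractions of the frame size."""
    left = x / frame_w
    right = (x + w) / frame_w
    top = y / frame_h
    bottom = (y + h) / frame_h
    return left, right, top, bottom
```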
This demo program can be downloaded from the following links:
This demo program presents the way of working with the IEVRMultiSinkFactory and IEVRStreamControl interfaces of CaptureManagerToCSharpProxy. It shows how to create two EVR sinks, connect them to different capture sessions and control each destination: size, position, scale and flush. This demo uses Screen Capture sources so that it works correctly even without accessible USB cameras.
You can get this demo program from the following links:
This demo program presents the way of working with the ICaptureProcessor, IInitilaizeCaptureSource, ICurrentMediaType, ISourceRequestResult, IEVRMultiSinkFactory and IEVRStreamControl interfaces of CaptureManagerToCSharpProxy. It shows how to create two EVR sinks, control them and connect them to two instances of the IPCameraMJPEGCaptureProcessor class.
You can get this demo program from the following links:
This demo program presents the way of working with CaptureManager SDK for capturing live video and audio from sources, encoding them and recording them into a file while previewing the live video. The connection of the components is presented in the next schema:
The demo program has a console interface:
This demo program can be downloaded from the link below:
This demo program presents the way of working with CaptureManager SDK for capturing live video and audio from sources, encoding them and recording them into a file while previewing the live video. The connection of the components is presented in the next schema:
The demo program has a WPF GUI and allows finer control:
This demo program can be downloaded from the following links:
This demo program presents the way of working with CaptureManager SDK for rendering video from a media file by connecting it to the code of a MediaFoundation player. The original video renderer of MediaFoundation has limited functionality and can be used for simple programs, but more complex solutions need a renderer with the required functionality. CaptureManager version 1.7.0 Freeware supports video rendering with the following functionality:
- MultiSink video rendering - it is possible to render more than one video stream on one video context. Each video rendering sink is controlled independently of the others.
- It can use one of three video contexts:
- Handler of the window which is the rendering target:
- DirectX9 Texture - a render target texture which can be used for texturing a mesh for final rendering:
- DirectX9 SwapChain - a render target texture from the swap chain of a DirectX9 renderer:
- It allows control of the position, size and background color of the rendering.
This demo program can be obtained from the following link:
This demo program presents the way of working with CaptureManager SDK for rendering video from a media file by connecting it to the code of a MediaFoundation player.
This demo program has a WPF interface and shows how to correctly create a group of MediaFoundation players with CaptureManager MultiSink video renderers. It supports CaptureManager version 1.7.0 Freeware:
This demo program can be downloaded from the following link:
This demo program presents the way of working with CaptureManager SDK in the Unity Game Development SDK on Windows OSs. CaptureManager SDK is included in Unity via a plugin which passes the pointer of a DirectX9 rendering texture into the C++ code, where it is set as the rendering target. The C# code of the Unity side gets a text string from CaptureManager SDK and displays information about the sources.
This demo program can be downloaded from the following links:
This demo program presents the way of recording video and audio data from many sources into one file. CaptureManager supports flexible management of connections from many sources to many sinks. The ASF sink implements Windows OS support of the ASF Microsoft media format - a special format for Microsoft platforms with many features. This demo program lets you test one of these features - multi-stream recording. It is possible to select many sources for recording; moreover, these sources can include more than one video source. This allows recording synchronized video from many live video sources into one media file.
The schema of the source connection has the next view:
The VideoLAN player can open such a multi-video-stream file:
This demo program can be downloaded from the following links:
This demo program presents the way of modifying the "Screen Capture" source by adding a "normalize" mode. CaptureManager has functionality for capturing an image from the display screen. CaptureManager supports multi-display outputs and can create a live video stream from the "Screen Capture" source in two modes: regular and normalize. "Regular" mode is the default mode of the "Screen Capture" source - it takes the image from the displays as-is, without any post-processing:
"Landscape" image is taken in the normal view:
"Flipped Landscape" image is taken rotated by 180 degrees:
"Portrait" image is taken rotated by 90 degrees:
"Flipped Portrait" image is taken rotated by 270 degrees:
This is done to keep the maximum quality of video encoding.
"Normalize" mode overcomes the problem of different screen proportions by executing image post-processing operations. At the present moment, it supports one normalization - "Landscape" (" --normalize=Landscape") - which normalizes any image to one orientation: top in the upper part of the image, bottom in the lower part. It can be enabled in code:
if ((bool)mNormalizeChkBtn.IsChecked)
    lextendSymbolicLink += " --normalize=Landscape";
Enabling this mode leads to the next result:
"Landscape" image after normalize post-processing:
"Flipped Landscape" image after normalize post-processing:
"Portrait" image after normalize post-processing:
"Flipped Portrait" image after normalize post-processing:
You can get this demo program from the following links:
This demo program presents the way of setting the "Screen Capture" source's options.
The "Cursor" option's extension is expanded with a new attribute, "Shape", which has the following values:
- "Rectangle" - the shape of the filling is a rectangle
- "Ellipse" - the shape of the filling is an ellipse
It allows setting the rectangular shape of the back image:
or the ellipse shape of the back image:
The "Clip" option allows defining a "Region Of Interest" on the desktop screen and clipping the output video stream to it:
XML of the new option has the form:
<Option Type='Clip' >
    <Option.Extensions>
        <Extension Left='0' Top='0' Height='100' Width='100' />
    </Option.Extensions>
</Option>
This demo program can be downloaded from the following links:
This demo program presents the way of setting the "Screen Capture" source's options for capturing the screen view of a selected application window.
Selecting the source application window starts with a click on the "Select by click" button - it initializes the Windows OS API for getting the HWND handle of the window under the mouse pointer; the name of that window is displayed under the "Select by click" button:
Catching the HWND handle of the window is done by pressing the "L" keyboard button. This allows getting the size of the captured window and selecting the frame rate:
After selecting the window, it is possible to apply the "Screen Capture" source's options for setting the cursor back mask and the clipping region of interest:
Setting the HWND of the source window is done by adding the argument "--HWND" to the "Screen Capture" symbolic link:
lextendSymbolicLink += " --HWND=" + m_CurrentHWND.ToInt32().ToString();
This demo program can be downloaded from the following links:
This demo program presents the way of working with the ICaptureProcessor interface of CaptureManager SDK in the Unity Game Development SDK on Windows OSs. CaptureManager SDK is included in Unity via a plugin which captures the game engine's rendered screen and records it into the output video file.
This demo program supports two working modes:
The first mode captures and renders the image, to check that the capture processor works correctly:
The second mode records, with a selection of encoder and output file format:
This demo program can be obtained from the following links:
This demo program presents the way of viewing and recording video and audio from external sources in the Unity Game Development SDK on Windows OSs.
This demo program can be downloaded from the following links:
This demo program presents the way of using the Switcher node. This code uses two output nodes for getting images from the source: EVR and Sample Grabber. The Sample Grabber takes a sample once per second and checks the average brightness of the sample - if this value is greater than or equal to the threshold, the Switcher node is turned on and the EVR gets samples at 30 fps. If the value is less than the threshold, the Switcher node is turned off and the EVR gets nothing. This schema has the next view:
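The brightness gate described above can be sketched independently of the SDK; the threshold value here is an assumed example:

```python
# Illustrative sketch (not SDK code) of the Sample Grabber logic described
# above: compute the average brightness of a grabbed frame and decide the
# Switcher state against a threshold.
def average_brightness(gray_pixels):
    """gray_pixels: iterable of 0..255 luminance values."""
    pixels = list(gray_pixels)
    return sum(pixels) / len(pixels)

def switcher_should_be_on(gray_pixels, threshold=64):
    # on  -> the EVR receives samples at 30 fps
    # off -> the EVR receives nothing
    return average_brightness(gray_pixels) >= threshold
```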
The resulting demo has the next view:
This demo program can be downloaded from the links below:
This demo program presents the way of using the Switcher node. The Switcher is set in the recording stream and allows pausing/resuming that video stream while the EVR preview stream keeps working independently.
This demo program can be downloaded from the links below:
This demo program presents the way of using the Switcher node for attaching and detaching the recording stream to the source at runtime WITHOUT stopping the source. The ISwitcherControl interface has commands for flexibly changing the configuration of the Switcher node.
Such a solution can be useful for recording media files with a fixed size - when the size of the current recording media file reaches the threshold value, the current file is detached and a new one is attached to the recording stream.
In this demo program, the Switcher nodes (for the video and audio streams) are created WITHOUT a recording media file - this is possible. A new recording media file is attached by clicking the "Start recording" button. The old one (if it exists) is detached.
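The fixed-size rotation policy described above can be sketched independently of the SDK (the file names and size threshold are assumed examples):

```python
import itertools

# Illustrative sketch (not SDK code) of the fixed-size recording policy
# described above: when the current file reaches the size threshold, the
# recorder detaches it and attaches a new file, without stopping the source.
class RotatingRecorder:
    def __init__(self, threshold_bytes):
        self.threshold = threshold_bytes
        self.counter = itertools.count(1)
        self.current = self._attach_new_file()
        self.written = 0

    def _attach_new_file(self):
        # In the real demo this would reconfigure the Switcher node via
        # ISwitcherControl; here we only track the hypothetical file name.
        return "record_%03d.mp4" % next(self.counter)

    def write(self, chunk_size):
        self.written += chunk_size
        if self.written >= self.threshold:
            self.current = self._attach_new_file()
            self.written = 0
        return self.current
```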
This demo program can be downloaded from the links below:
This demo program presents the way of using the Screen Capture source. Some time ago, I needed a screen recorder application and tried to develop a simple application which covers almost 99% of the functionality of other similar applications, using only CaptureManager SDK. It has a simple interface:
It supports clipping by an "elastic band rectangle":
It generates the filename from the current date:
It has configuration for showing and hiding the cursor icon and the back image mask:
It has configuration for the video encoder, including compression quality:
It has configuration for selecting the output file format and the directory for storing the recorded video files:
The current configuration can be saved and reset to the default values.
You can get this demo program from the following links:
These demo programs present the way of using the DirectX9 texture shared handle for sharing the rendering context between two processes. They include the WPFInterProcessClient and InterProcessRenderer projects.
Direct3D9Ex allows creating a DirectX9 texture with a shared handle:
private Direct3DSurface9 GetSharedSurface(Direct3DDevice9Ex device)
{
return device.CreateRenderTarget(
m_width,
m_height,
Format,
0,
0,
0
);
}
Each Direct3DSurface9 has a Handle property which can be used as the rendering target in another process.
Such a solution allows separating the user-interface process from the capture-rendering process:
mOffScreenCaptureProcess = new Process();
mOffScreenCaptureProcess.StartInfo.FileName = "InterProcessRenderer.exe";
mOffScreenCaptureProcess.EnableRaisingEvents = true;
mOffScreenCaptureProcess.Exited += mOffScreenCaptureProcess_Exited;
mPipeProcessor = new PipeProcessor(
"Server",
"Client");
mPipeProcessor.MessageDelegateEvent += lPipeProcessor_MessageDelegateEvent;
mOffScreenCaptureProcess.StartInfo.Arguments =
"SymbolicLink=" + aSymbolicLink + " " +
"StreamIndex=" + aStreamIndex + " " +
"MediaTypeIndex=" + aMediaTypeIndex + " " +
"WindowHandler=" + mVideoPanel.Handle.ToInt64();
mOffScreenCaptureProcess.StartInfo.UseShellExecute = false;
try
{
mIsStarted = mOffScreenCaptureProcess.Start();
HeartBeatTimer.Start();
}
catch (Exception)
{
mIsStarted = false;
}
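The argument string built above is a space-separated list of Key=Value pairs; on the InterProcessRenderer side it could be parsed with logic like this sketch (not the actual project code):

```python
# Illustrative sketch (not the project's code): parsing the "Key=Value"
# command-line arguments that are passed to InterProcessRenderer.exe above.
def parse_args(argv):
    """Turn ['SymbolicLink=...', 'StreamIndex=0', ...] into a dict."""
    result = {}
    for arg in argv:
        key, sep, value = arg.partition("=")
        if sep:  # keep everything after the first '=' as the value
            result[key] = value
    return result
```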
These demo programs can be downloaded from the links below:
This demo program presents the way of grouping video and audio sources by unique device id. Each device in Windows OS has a unique device id. This demo program displays this id and groups the devices according to it.
This solution can be useful for tasks with many video and audio USB sources from one vendor - this code groups the video and audio sources according to which real USB device they belong to.
You can get this demo program from this link:
This demo program presents the way of using the DirectX9 texture shared handle for rendering. The current generation of Windows OS has functionality for sharing the rendering texture between different versions of DirectX.
You can get this demo program from the below links:
This demo program presents the way of using CaptureManager SDK with a DirectShow player as a Video Renderer in a C++ Windows-application project. This demo program uses DirectShow for playing video in C++ code, together with CaptureManager's EVRMultiSink output node:
HRESULT hr = S_OK;
IBaseFilter *pEVR = NULL;
unsigned long lPtrOutputNodeAmount = 0;
CaptureManagerLoader::getInstance().getMaxVideoRenderStreamCount(&lPtrOutputNodeAmount);
std::vector<CComPtrCustom<IUnknown>> lOutputNodes;
CaptureManagerLoader::getInstance().createRendererOutputNodes(
hwnd,
nullptr,
1,
lOutputNodes);
if (!lOutputNodes.empty())
{
lOutputNodes[0]->QueryInterface(IID_PPV_ARGS(&pEVR));
}
CHECK_HR(hr = pGraph->AddFilter(pEVR, L"EVR"));
You can get the demo program from the below link:
This demo program presents the way of using CaptureManager SDK with a DirectShow player as a Video Renderer in a WPF application. This demo program uses the DirectShowLib C# NuGet package for playing video in C# code, together with CaptureManager's EVRMultiSink output node:
List<object> lOutputNodesList = new List<object>();
lCaptureManagerEVRMultiSinkFactory.createOutputNodes(
IntPtr.Zero,
mEVRDisplay.Surface.texture,
1,
out lOutputNodesList);
if (lOutputNodesList.Count == 0)
return;
IBaseFilter lVideoMixingRenderer9 = (IBaseFilter)lOutputNodesList[0];
var h = m_pGraph.AddFilter(lVideoMixingRenderer9, "lVideoMixingRenderer9");
The version for DirectShow has only a software implementation of the renderer.
This demo program can be downloaded from the following links:
This demo program presents the way of using CaptureManager SDK in an RTSP streaming service. This demo program is based on the GitHub project SharpRTSP - a C# RTSP handling library for developing RTSP servers and clients.
CaptureManager SDK has functionality for getting raw data from video and audio sources - the demo program WPFWebViewerCallback presents the way of getting RGB32 images for display. However, the SDK can deliver any raw data from its pipeline - even encoded video and audio data. This demo program defines the output node for the video stream:
ISampleGrabberCallback lH264SampleGrabberCallback;
aISampleGrabberCallbackSinkFactory.createOutputNode(
MFMediaType_Video,
MFVideoFormat_H264,
out lH264SampleGrabberCallback);
object lOutputNode = lEVROutputNode;
if (lH264SampleGrabberCallback != null)
{
uint ltimestamp = 0;
lH264SampleGrabberCallback.mUpdateEvent += delegate
(byte[] aData, uint aLength)
{
if (s != null)
{
ThreadPool.QueueUserWorkItem((object state) =>
{
s.sendData(aIndexCount, ltimestamp, aData);
ltimestamp += 33;
});
}
};
and for audio stream:
ISampleGrabberCallback lAACSampleGrabberCallback;
aISampleGrabberCallbackSinkFactory.createOutputNode(
MFMediaType_Audio,
MFAudioFormat_AAC,
out lAACSampleGrabberCallback);
if (lAACSampleGrabberCallback != null)
{
lAACSampleGrabberCallback.mUpdateFullEvent += delegate
(uint aSampleFlags, long aSampleTime,
long aSampleDuration, byte[] aData, uint aLength)
{
if (s != null)
{
ThreadPool.QueueUserWorkItem((object state) =>
{
s.sendData(aIndexCount,
(uint)aSampleTime / 100000, aData);
});
}
};
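The timestamp handling in the two callbacks above can be sketched independently of the SDK. The 33 ms step in the video callback corresponds to ~30 fps, and Media Foundation sample times arrive in 100-nanosecond units (the demo's sendData call uses its own scale):

```python
# Illustrative sketch (not SDK code) of the timestamp arithmetic used by
# the callbacks above.
def video_timestamp(frame_index, fps=30):
    """Millisecond timestamp of a frame at the given frame rate."""
    return frame_index * 1000 // fps

def mf_time_to_ms(sample_time_100ns):
    """Convert a Media Foundation sample time (100-ns units) to milliseconds."""
    return sample_time_100ns // 10_000
```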
Combining this SDK with the SharpRTSP code allows creating a workable RTSP video and audio streaming server:
To support widely spread decoders, CaptureManager can encode video into the H264 and H265 (HEVC) video formats:
You can get the demo programs from the following links:
This demo program presents the way of using CaptureManager SDK in an RTSP streaming service. This demo program is based on the GitHub project SharpRTSP - a C# RTSP handling library for developing RTSP servers and clients.
CaptureManager SDK has functionality for capturing raw data from an external source. This source can provide encoded data - in this example, the source is a test server with an H264-encoded video stream:
You can get the demo programs from the following links:
This demo program presents the way of using CaptureManager SDK with RTMP streaming services (Facebook (RTMPS), YouTube and others). Real-Time Messaging Protocol (RTMP) was initially a proprietary protocol developed by Macromedia for streaming audio, video and data over the Internet between a Flash player and a server; Macromedia is now owned by Adobe, which has released an incomplete version of the protocol specification for public use. This demo program is based on librtmp - a C library for developing RTMP streaming clients.
You can get the demo programs from the following links:
This demo program presents the way of using the mixing functionality of CaptureManager SDK. The new SDK interfaces:
IMixerNodeFactory
IVideoMixerControl
IAudioMixerControl
allow combining many video streams into one video stream and many audio streams into one audio stream. The IMixerNodeFactory interface creates a list of topology input nodes with one output node. The IVideoMixerControl interface controls the video mixing of each source:
setPosition - set the position of the input video stream on the back reference video stream
setSrcPosition - set the region in the input video stream
setZOrder - set the Z order of the mixer
setOpacity - set the opacity of the input video stream
flush - clear the video input stream buffer
public interface IVideoMixerControl
{
bool setPosition(object aPtrVideoMixerNode,
float aLeft,
float aRight,
float aTop,
float aBottom);
bool setSrcPosition(object aPtrVideoMixerNode,
float aLeft,
float aRight,
float aTop,
float aBottom);
bool setZOrder(object aPtrVideoMixerNode,
UInt32 aZOrder);
bool setOpacity(object aPtrVideoMixerNode,
float aOpacity);
bool flush(object aPtrVideoMixerNode);
}
The IAudioMixerControl interface controls mixing of an audio stream with the reference back audio stream. setRelativeVolume sets the volume of the input audio stream relative to the reference back audio stream: output = (input audio stream * aRelativeVolume) + (reference back audio stream * (1.0 - aRelativeVolume)).
public interface IAudioMixerControl
{
bool setRelativeVolume(object aPtrAudioMixerNode, float aRelativeVolume);
}
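The setRelativeVolume formula quoted above can be checked with a small sketch (not SDK code), applied to single sample values:

```python
# Illustrative sketch (not SDK code) of the setRelativeVolume mixing
# formula: output = input * v + reference * (1.0 - v), with v in [0.0, 1.0].
def mix_sample(input_sample, reference_sample, relative_volume):
    return (input_sample * relative_volume
            + reference_sample * (1.0 - relative_volume))
```

With relative_volume = 1.0 only the input stream is heard; with 0.0 only the reference back stream remains.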
You can get the demo programs from the following links:
This demo program presents the way of recording animated GIF images into a video file with CaptureManager SDK. For injecting the decoded GIF frames, the demo program uses an implementation of the ICaptureProcessor interface.
This demo program can be downloaded from the links below:
This demo program presents the way of recording the output of the system audio mixer into an audio file with CaptureManager SDK. Audio is a type of data which needs low latency for correct recording - CaptureManager SDK has an optimized media processing pipeline and can capture audio data with high quality.
You can get this demo program from the link below:
This demo program presents the way of playing the audio stream from a web camera through the system audio renderer with CaptureManager SDK.
This demo program can be downloaded from the links below:
This demo program presents the way of creating a DirectShow "Virtual Camera" filter - a special software media component of Windows OS which is recognized by webcam applications (like Skype or Zoom) as a real webcam device. Such a solution allows integrating C# video processing code based on CaptureManager SDK into the video processing pipeline of 3rd party webcam applications.
However, this demo program shows a more complex product - the demo is divided into two projects: a C# DirectShow filter and a WPF/C# out-of-process COM server. This magic is possible thanks to the Windows OS cross-process rendering technique of DirectX 11.
After compiling this demo, there are two base files in the Bin folder:
- WPFVirtualCameraServer.exe - out-of-process COM server
- VirtualCameraDShowFilter.dll - in-process DirectShow filter
To install this demo into the target Windows OS, execute the install.bat script file.
To uninstall this demo from the Windows OS, execute the uninstall.bat script file.
Because the DirectShow filter is part of the Windows OS COM infrastructure, there is a need to generate and register TLB (type library) files - VirtualCameraDShowFilter.tlb and WPFVirtualCameraServer.tlb. This generation and registration are executed by the install.bat script. After executing install.bat, this software emulation of a video camera is displayed as "Virtual Camera" in the list of accessible web cameras:
At the same moment, Windows OS launches the background process WPFVirtualCameraServer for processing and rendering video:
How does it work?! When a webcam application recognizes VirtualCameraDShowFilter.dll as a real video source, this DLL asks Windows OS to create one instance of the COM VirtualCameraServer by ProgID="WPFVirtualCameraServer.VirtualCameraServer". Because the COM VirtualCameraServer is registered as a LocalServer32, the target Windows OS creates the WPFVirtualCameraServer process, with automatic cross-process marshaling of data between the WPFVirtualCameraServer process and the process of the webcam application.
VirtualCameraDShowFilter.dll communicates with the WPFVirtualCameraServer process via the IVirtualCameraServer interface, which has only one method - get_DirectX11TextureHandler:
[ComVisible(false)]
[Guid("EEE2F595-722F-4279-B919-313189A72C36")]
[InterfaceType(ComInterfaceType.InterfaceIsDual)]
public interface IVirtualCameraServer
{
IntPtr get_DirectX11TextureHandler(out int retVal);
}
The get_DirectX11TextureHandler method returns a DirectX11 texture shared handle which allows sharing the DirectX 11 texture between processes (you can get more information at ID3D11Device::OpenSharedResource).
The WPFVirtualCameraServer process's activity is presented in the taskbar as an icon. Double-clicking the left mouse button on this icon shows the WPF UI for controlling the cross-process rendering - the position and size of the rendering:
A single click of the right mouse button on the icon of the WPFVirtualCameraServer process shows a context menu for choosing the video source - the default video source is Screen Capture.
This solution allows duplicating one real video source to many webcam applications:
The schema of the process interaction timeline shows the lifecycle of the WPFVirtualCameraServer process in relation to 3rd party webcam applications:
This demo program can be obtained from here.
This demo program presents the way of plugging the log4net framework into a project with CaptureManager SDK. For implementing such functionality, the new interface ILogPrintOutCallback has been added.
[
object,
uuid(251E71F6-8C02-475F-B300-216D560426B2),
oleautomation,
helpstring("Interface for processing callback of log.")
]
interface ILogPrintOutCallback : IUnknown
{
[
helpstring("Method for callback invoking.")
]
HRESULT invoke(
[in] DWORD aLevelType,
[in] BSTR aPtrLogString);
};
In the project, this new interface is implemented in the following way:
LogManager.getInstance().WriteLogDelegateEvent += (aLogLevel, aPtrLogString)=>
{
switch (aLogLevel)
{
case CaptureManagerToCSharpProxy.WrapClasses.LogLevel.INFO_LEVEL:
log.Info(aPtrLogString);
break;
case CaptureManagerToCSharpProxy.WrapClasses.LogLevel.ERROR_LEVEL:
log.Error(aPtrLogString);
break;
default:
break;
}
};
You can get this demo program from the following links:
This demo program presents the way to process the event of pressing the hardware button of a web cam in a project with CaptureManager SDK. This is done by extending SessionCallbackEventCode with SnapTrigger. The code can have the next view:
void UpdateStateDelegate(uint aCallbackEventCode, uint aSessionDescriptor)
{
SessionCallbackEventCode k = (SessionCallbackEventCode)aCallbackEventCode;
switch (k)
{
case SessionCallbackEventCode.Unknown:
break;
case SessionCallbackEventCode.Error:
break;
case SessionCallbackEventCode.Status_Error:
break;
case SessionCallbackEventCode.Execution_Error:
break;
case SessionCallbackEventCode.ItIsReadyToStart:
break;
case SessionCallbackEventCode.ItIsStarted:
break;
case SessionCallbackEventCode.ItIsPaused:
break;
case SessionCallbackEventCode.ItIsStopped:
break;
case SessionCallbackEventCode.ItIsEnded:
break;
case SessionCallbackEventCode.ItIsClosed:
break;
case SessionCallbackEventCode.VideoCaptureDeviceRemoved:
{
Dispatcher.Invoke(
DispatcherPriority.Normal,
new Action(() => mLaunchButton_Click(null, null)));
}
break;
case SessionCallbackEventCode.SnapTrigger:
MessageBox.Show("Hardware button is pressed!!!");
break;
default:
break;
}
}
You can get this demo program from the following links:
Capture Manager SDK v1.0.0 – open release. It includes interfaces with a stable definition:
via class CoLogPrintOut
– interface ILogPrintOut
via class CoCaptureManager
– interface ICaptureManagerControl
:
method createControl
– creates the main control objects with interfaces:
ISourceControl
, ISinkControl
, IEncoderControl
, ISessionControl
, IStreamControl
;
method createMisc
– creates a miscellaneous object with interfaces:
IMediaTypeParser
IStrideForBitmap
IVersionControl
via object with interface ISourceControl
:
via object with interface ISinkControl
:
IFileSinkFactory
ISampleGrabberCallSinkFactory
IEVRSinkFactory
ISampleGrabberCallbackSinkFactory
IByteStreamSinkFactory
via object with interface ISession
:
via object with interface ISampleGrabberCallSinkFactory
:
via object with interface ISampleGrabberCallbackSinkFactory
:
These interfaces do the below work:
ILogPrintOut – manages writing of log information
ICaptureManagerControl – manages creation of all capturing controls and miscellaneous objects
ISourceControl – manages and creates sources of video and audio signals
ISinkControl – manages and creates sinks of video and audio streams
IEncoderControl – manages and creates video and audio encoders
ISessionControl – creates the object for managing a recording session
IStreamControl – creates objects for controlling media streams
IMediaTypeParser – creates a text representation of a media type in XML format
IStrideForBitmap – computes the memory stride for a specific bitmap format
IVersionControl – manages information about the current version of CaptureManager
IWebCamControl – manages options of the web camera
IFileSinkFactory – creates media sink nodes which are linked with a media file
ISampleGrabberCallSinkFactory – creates a media sink for grabbing one sample by a direct call of an object with the ISessionCallback interface
ISampleGrabberCall – manages grabbing of one sample
ISampleGrabberCallbackSinkFactory – creates a media sink for grabbing one sample by calling an object with the ISampleGrabberCallback interface from a CaptureManager inner thread
ISampleGrabberCallback – manages grabbing of one sample from a CaptureManager inner thread
IEVRSinkFactory – creates a media sink for rendering a video stream
IByteStreamSinkFactory – creates media sink nodes which are linked with a customised byte stream object with the IMFByteStream interface
ISession – manages the recording session
ISessionCallback – manages the result state of the current recording session from a CaptureManager inner thread
The definitions of these interfaces are presented in the SDK files and in the CaptureManagerToCSharpProxy project. For effective marshalling of information across the unmanaged-managed code border, CaptureManager uses eight simple XML documents with the below structures:
These documents contain information about sources, encoders, sinks and stream controls. Most types and GUIDs are taken from Microsoft Media Foundation MSDN: Microsoft Media Foundation, but there are some rules for the definition of nodes in these XML documents:
SingleValue - a node presenting only one value: name, integer, type
RatioValue - a node presenting a floating point value with a Numerator and a Denominator
Value.ValueParts - a node storing a collection of ValuePart nodes of the same type
Examples of parsing these XML documents can be found in the code of the WPF examples.
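As an illustration, a RatioValue node could be parsed like this sketch (the sample XML is hypothetical, following the node rules above; it is not copied from the SDK):

```python
import xml.etree.ElementTree as ET

# Illustrative sketch: parsing a RatioValue node with its ValuePart
# children. The sample XML below is hypothetical, following the node
# rules described above.
sample = """
<Attribute Name='MF_MT_FRAME_RATE'>
  <RatioValue Value='30.0'>
    <Value.ValueParts>
      <ValuePart Title='Numerator' Value='30' />
      <ValuePart Title='Denominator' Value='1' />
    </Value.ValueParts>
  </RatioValue>
</Attribute>
"""

def parse_ratio(attribute_xml):
    """Return Numerator / Denominator from a RatioValue node."""
    root = ET.fromstring(attribute_xml)
    parts = {p.get("Title"): int(p.get("Value"))
             for p in root.iter("ValuePart")}
    return parts["Numerator"] / parts["Denominator"]
```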
CaptureManager SDK v1.1.0 – open release. This release includes the following changes:
- Added support for automatic registration of CaptureManager as a COM server with a Type Library
- Added support for the dynamic language Python 2.7
- Added support for the HEVC (H265) encoder on Windows 10: Download HEVCEncoder_Windows10
CaptureManager SDK v1.2.0 – beta release. This release includes the following changes:
- Deleted the old functionality of working with CaptureManager via library linking. The old demo programs are moved to old demos.
- Lazy binding of Microsoft Media Foundation functions.
- Replaced the web camera properties functionality from DirectShow with DeviceIoControl.
- Added resizing in EVR.
Lazy binding of functions is implemented by loading the Microsoft Media Foundation libraries at runtime. Functions which cannot be found in the Microsoft Media Foundation libraries are replaced by stub functions. This prevents crashing of applications that use unsupported functions of the Microsoft Media Foundation libraries.
Using DeviceIoControl for changing web camera properties resolves a problem with the delay of web camera driver initialization. It also expands the workable set of web camera properties with: "Amount of digital zoom", "Upper limit for the amount of digital zoom", "Power line frequency".
Resizing in EVR resolves the problem of the unchanging image size of the rendered video. The new implementation tracks the current image size of the renderer GUI and changes the image size and position to keep the proportions of the video.
CaptureManager SDK v1.2.0 – release. This release includes the following changes:
- Deleted the old functionality of working with CaptureManager via library linking. The old demo programs are moved to old demos.
- Lazy binding of Microsoft Media Foundation functions.
- Replaced the web camera properties functionality from DirectShow with DeviceIoControl.
- Added resizing in EVR.
- Added PULL mode in SampleGrabberCallSinkFactory - a new mode which allows taking a single sample.
- Added sample accumulator nodes for storing the 5 or 10 last samples in a media stream.
Lazy binding of functions is implemented by loading the Microsoft Media Foundation libraries at runtime. Functions which cannot be found in the Microsoft Media Foundation libraries are replaced by stub functions. This prevents crashing of applications that use unsupported functions of the Microsoft Media Foundation libraries.
Using DeviceIoControl for changing web camera properties resolves a problem with the delay of web camera driver initialization. It also expands the workable set of web camera properties with "Amount of digital zoom", "Upper limit for the amount of digital zoom", "Power line frequency".
Resizing in EVR resolves the problem of the unchanging image size of the rendered video. The new implementation tracks the current image size of the renderer GUI and changes the image size and position to keep the proportions of the video.
CaptureManager
SDK v1.3.0 – beta. This release includes the following changes:
- Added support of video capturing via the DirectShow Crossbar technique for the following inputs:
- Composite
- SVideo
- USB
- 1394 (FireWire)
On MSDN - Audio/Video Capture in Media Foundation - you can find that "Video capture devices are supported through the UVC class driver and must be compatible with UVC 1.1". It means that Microsoft Media Foundation can capture video only from devices which support the UVC driver - the USB Video Class driver. These are usually USB web cameras - other types of video capture devices are not supported. From my experience, I can say that the target platform of Microsoft Media Foundation is WindowsStore applications. WindowsStore applications are originally targeted at portable devices like "Pad" - "Surface" - such devices DO NOT have external ports for connecting capture cards. They can work only with embedded web cameras which are connected via inner USB ports. I think that this is a correct idea, but I have got many questions about using CaptureManager with capture cards which support DirectShow Crossbar and answered that it is impossible. However, that is not the full answer - the full answer is that it is possible to see a capture card which supports DirectShow Crossbar via the Microsoft Media Foundation functionality, but the captured video is not accessible. After some research, I found a way to get access to such capture cards. Info for working with a capture card can be selected from the XML string of sources by checking the attribute nodes of the Source with the name 'MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_CATEGORY': "Source.Attributes/Attribute[@Name='MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_CATEGORY']/SingleValue[@Value='CLSID_VideoInputDeviceCategory']" - Microsoft Media Foundation defines CLSID_WebcamInterfaceDeviceCategory - {E5323777-F976-4F5B-9B55-B94699C46E44} for devices with UVC support and CLSID_VideoInputDeviceCategory - {860BB310-5D01-11D0-BD3B-00A0C911CE86} for DirectShow capture devices. This makes it easy to select only DirectShow video capture devices. The input connection of DirectShow can be selected according to its physical type by choosing the suitable media type - each media type node has a child MediaTypeItem node with the name DirectShowPhysicalType - {09FCAE6B-8F00-4C0A-BBCA-C6820B576C99}. It allows grouping the media types according to the physical type of the input connection.
<MediaTypeItem Name="DirectShowPhysicalType"
GUID="{09FCAE6B-8F00-4C0A-BBCA-C6820B576C99}"
Title="Physical type of input pin of Crossbar."
Description="Physical type of input pin of Crossbar.">
<SingleValue Value="PhysConn_Video_Composite" />
</MediaTypeItem>
CaptureManager SDK v1.3.0. This release includes the following changes:
- Added support of video capturing via the DirectShow Crossbar technique for the following inputs:
- Composite
- SVideo
- USB
- 1394 (FireWire)
- The COM ThreadingModel has been changed from Apartment to Both.
CaptureManager
SDK v1.4.0 – beta. This release includes the following changes:
- Added support for changing the cursor presentation in the Screen Capture source: the cursor can be made invisible, and an additional image can be drawn for the current cursor's image:
This functionality is enabled by adding " --options=" into the symbolic link. The options are set in XML format:
<Options>
<Option Type="Cursor" Visiblity="True">
<Option.Extensions>
<Extension Fill="0x7055ff55" Height="100" Type="BackImage" Width="100"/>
</Option.Extensions>
</Option>
</Options>
Selection of the cursor option is done by the "Type='Cursor'" attribute. The attribute "Visiblity='True'" makes the current type - Cursor - visible, and the attribute "Visiblity='False'" makes it invisible - this allows capturing the screen without drawing the cursor. The XML node "Option.Extensions" allows adding extensions. At the present time, the Screen Capture source supports the following extension:
"<Extension Type='BackImage' Height='100' Width='100' Fill='0x7055ff55' />"
The attribute "Type='BackImage'" defines drawing a 'marker' image behind the cursor's image. The size of the 'BackImage' is defined by the attributes "Height='100' Width='100'", and the color is defined by the attribute "Fill='0x7055ff55'" in ARGB format. In code, it can be implemented in this way:
string lextendSymbolicLink = lSymbolicLink + " --options=" +
    "<?xml version='1.0' encoding='UTF-8'?>" +
    "<Options>" +
        "<Option Type='Cursor' Visiblity='True'>" +
            "<Option.Extensions>" +
                "<Extension Fill='0x7055ff55' Height='100' " +
                           "Type='BackImage' Width='100' />" +
            "</Option.Extensions>" +
        "</Option>" +
    "</Options>";
CaptureManager
SDK v1.4.0. This release includes the following changes:
- Added support for changing the cursor presentation in the Screen Capture source: the cursor can be made invisible, and an additional image can be drawn for the current cursor's image.
- Added support for using custom Microsoft Media Foundation Transforms for implementing custom effects (for example - WaterMarkInjectorDemo, TextInjectorDemo).
CaptureManager
SDK v1.5.0. This release includes the following changes:
- Added support of multidisplay outputs.
- Added support of DirectShow software filters.
Capture Manager SDK v1.6.0. This release includes the following changes:
- Added the IEVRMultiSinkFactory and IEVRStreamControl interfaces: IEVRMultiSinkFactory allows creating two separate Topology Nodes for one rendering context - an HWND of a Window, a Direct3DSurface9 or an IDXGISwapChain; IEVRStreamControl allows controlling the rendering result - rendering position, rendering Z-order, flushing of the stream buffer, filters and features.
- Added the IInitilaizeCaptureSource, ICurrentMediaType, ISourceRequestResult and ICaptureProcessor interfaces: IInitilaizeCaptureSource receives the configuration of the Capture Processor; ICurrentMediaType contains info about the selected Media Type; ISourceRequestResult receives raw data in the format which is defined in IInitilaizeCaptureSource; ICaptureProcessor controls the capturing process.
CaptureManager has an implementation of multisink rendering via a shared context. This solution is different from the common implementation of Microsoft Media Foundation. The original solution uses the 'Session' pattern - working with sources, decoders, encoders and sinks in one session. As a result, many stream pipelines of one session are synchronized and controlled by the single commands `Start`, `Stop`, `Pause`. It works fine in the case of playing one video file. Microsoft Media Foundation has a function for creating one source from many others - MFCreateAggregateSource - it allows implementing the `Picture in Picture` effect for rendering two or more video files. However, it has limitations - it works only in the context of one session, which means that all sources of the session are started, stopped and paused at the same time.
That can be OK for playing video from files, but what to do in the situation of sources with different latency - for example, two sources - one from a local webcam and another from an IP camera over HTTP? Another problem - if we want to stop only one source, the code stops ALL sources in the session. The solution of these problems can be using DIFFERENT INDEPENDENT sessions for each source, but the common implementation of Microsoft Media Foundation DOES NOT allow sharing a video rendering sink between many sessions. CaptureManager resolves these problems by adding one level of abstraction - while the common implementation of Microsoft Media Foundation creates ONE Video Rendering Media Sink from one video context, CaptureManager creates Many Video Rendering Media Sinks from a Controller which shares the one video context.
The additional level allows managing rendering more flexibly. A demo program for testing this functionality can be found in WPFMultiSourceViewer. CaptureManager Freeware supports a maximum of 2 rendering sinks from one rendering context. MultiRendering is presented in `Sink Factories` by the new `Sink Factory` node:
<SinkFactory Name="EVRMultiSinkFactory"
GUID="{10E52132-A73F-4A9E-A91B-FE18C91D6837}"
Title="Enhanced Video Renderer multi sink factory">
<Value.ValueParts>
<ValuePart Title="Container format" Value="Default"
MIME="" Description="Default EVR implementation"
MaxPortCount="2" GUID="{E926E7A7-7DD0-4B15-88D7-413704AF865F}" />
</Value.ValueParts>
</SinkFactory>
Rendering can be controlled by the IEVRStreamControl interface:
- setPosition method sets the position of the rendering on the shared rendering context in relative values from 0.0f to 1.0f.
- getPosition method gets the position of the rendering on the shared rendering context in relative values from 0.0f to 1.0f.
- setZOrder method sets the rendering order of the stream, from 0 to the maximum number of rendering sinks.
- getZOrder method gets the rendering order of the stream, from 0 to the maximum number of rendering sinks.
- flush method clears the rendering buffer of the rendering sink.
- setSrcPosition method sets a rectangle in the input stream for selecting a Region Of Interest.
- getSrcPosition method gets the rectangle in the input stream for selecting the Region Of Interest.
- getCollectionOfFilters method gets an XML string with the list of filters for the current rendering sink:
<Filters>
<Filter Title="" CurrentValue="" Min="" Max="" Step="" />
</Filters>
- setFilterParametr method sets a filter parameter for the current rendering sink.
- getCollectionOfOutputFeatures method gets an XML string with the list of output features for the rendering context:
<Features>
<Feature Title="Background Color">
<Color>
<Channel Index="1" Title="Red" CurrentValue="0" Min="0" Max="255" Step="1" />
<Channel Index="2" Title="Green" CurrentValue="0" Min="0" Max="255" Step="1" />
<Channel Index="3" Title="Blue" CurrentValue="0" Min="0" Max="255" Step="1" />
</Color>
</Feature>
</Features>
- setOutputFeatureParametr method sets an output feature parameter for the rendering context.
CaptureManager has functionality for capturing data from a custom Capture Processor. It includes the ICaptureProcessor interface for implementing a custom CaptureProcessor class. The needed configuration is set in the argument of the initialize method with the IInitilaizeCaptureSource interface. The selected configuration is set in the argument of the setCurrentMediaType method with the ICurrentMediaType interface. The needed data is requested in the sourceRequest method by the argument with the ISourceRequestResult interface. It allows developing solutions for capturing raw data - WPFImageViewer or WPFIPCameraMJPEGViewer.
CaptureManager
SDK v1.7.0. This release includes the following change:
- Added the CaptureManager Video Renderer Factory which is compatible with the MediaFoundation player. It supports the IEVRMultiSinkFactory and IEVRStreamControl interfaces via an additional sink factory:
<SinkFactory Name="CaptureManagerVRMultiSinkFactory"
GUID="{A2224D8D-C3C1-4593-8AC9-C0FCF318FF05}"
Title="CaptureManager Video Renderer multi sink factory">
<Value.ValueParts>
<ValuePart Title="Container format" Value="Default"
MIME="" Description="Default EVR implementation"
MaxPortCount="2" GUID="{E926E7A7-7DD0-4B15-88D7-413704AF865F}" />
</Value.ValueParts>
</SinkFactory>
Add the new demo programs listed below:
CaptureManager
Video Renderer Factory can be used for implementing a video sink for rendering video from a media file in many solutions.
CaptureManager
SDK v1.8.0. This release includes the following changes:
- Added "normalize" mode for the "Screen Capture" source
- Added the IRenderingControl interface for controlling rendering on the target process
- Added the "Shape" attribute for the "Cursor" option of the "Screen Capture" source
- Added the "Clip" type option of the "Screen Capture" source
Add the new demo programs listed below:
The "Screen Capture" source supports capture of screen as-is - it means that the rotated image is got as it is created by computer - rotated. However, in some situations, it may need to get image in some specific orientation of image, for example, get "Portrait" orientated screen in "Landscape" form. For this purpose, the "Screen Capture
" source supports mode "normalize
" for post processing of the current image to the needed orientation: " --normalize=Landscape
".
The "Screen Capture" source supports additional shape of the back image - "Ellipse", and the new option "Clip":
The new IRenderingControl interface is created by the ICaptureManagerControl::createMisc method and expands control of the rendering process. The IRenderingControl interface has the following signature:
MIDL_INTERFACE("1CBCAF1C-1809-41DE-A728-23DBF86A6170")
IRenderingControl : public IDispatch
{
public:
virtual HRESULT STDMETHODCALLTYPE enableInnerRendering(
IUnknown *aPtrEVROutputNode,
BOOL aIsInnerRendering) = 0;
virtual HRESULT STDMETHODCALLTYPE renderToTarget(
IUnknown *aPtrEVROutputNode,
IUnknown *aPtrRenderTarget,
BOOL aCopyMode) = 0;
};
The enableInnerRendering method switches the rendering model in CaptureManager. The renderToTarget method executes rendering to the target texture from the user rendering thread.
The reason to include a new interface for controlling rendering is the thread safety rules of DirectX. In the previous versions of CaptureManager, rendering executed in the context of the inner thread of CaptureManager. In most situations, such rendering does not lead to problems - in the demo programs, the DirectX9 device is created with support for multithreading, and a DirectX11 device is thread-safe by default. However, I faced a rendering problem when I started to modify the UnityWebCamViewer demo project to include DirectX11 rendering. After some search in the code, I found that ID3D11Device::CreateDeferredContext returned the error DXGI_ERROR_INVALID_CALL - according to this link - "if the device was created with the D3D11_CREATE_DEVICE_SINGLETHREADED value, CreateDeferredContext returns DXGI_ERROR_INVALID_CALL." It means that the Unity engine does not support updating a texture from a non-rendering thread. The IRenderingControl interface allows switching from multithreaded rendering to single-threaded rendering mode. By default, CaptureManager uses multithreaded rendering, which can be switched to single-threaded rendering by the method enableInnerRendering with the FALSE state of aIsInnerRendering. The method renderToTarget renders to the target in the context of the single rendering thread - the target can be a rendering texture with the interface IDirect3DTexture9 for DirectX9 or with the interface ID3D11Texture2D for DirectX11. The argument aCopyMode must be set to FALSE for DirectX9. For DirectX11, the argument aCopyMode can be TRUE or FALSE - it depends on the type of the rendering texture. DirectX11 allows creating TYPELESS texture formats for flexible type conversion. However, the DirectX11 Video Processor supports only UNORM types of rendering texture. The Unity engine creates rendering textures with TYPELESS texture formats - this leads to errors. The argument aCopyMode with TRUE creates an additional texture with a UNORM type, uses this texture as the rendering target, and copies from this texture to the original rendering texture with the TYPELESS texture format. This operation is executed in the context of the GPU very fast.
There is a CaptureManager
help file which you can download from here.
CaptureManager
SDK v1.9.0. This release includes the following changes:
- Add "
HWND
" mode for the "Screen Capture" source.
Add the new demo programs:
CaptureManager
SDK v1.10.0. This release includes the following changes:
- Added the ISwitcherNodeFactory interface for pause/resume of a media stream
- Added the ISwitcherControl interface for controlling (pause, resume, detach, attach) media streams
Add the new demo programs:
CaptureManager
SDK v1.11.0. This release includes the following changes:
- Added DirectX11 Texture Capture
- Changed Screen Capture output format from RGB32 to NV12
CaptureManager
SDK v1.12.0. This release includes the following changes:
- Added the CM_DEVICE_LINK attribute with a unique device id
- Added support for the DirectX9Ex texture shared handle
Add the new demo programs listed below:
CaptureManager
SDK v1.13.0. This release includes the following changes:
- Added support of the DirectShow Video Renderer
Added the new demo programs listed below:
CaptureManager
SDK v1.14.0. This release includes the following changes:
- Added H.264/MPEG-4 AVC video stream encoder (from 256 kbit/s to 64 kbit/s)
- Added H.265/MPEG-4 HEVC video stream encoder (64 kbit/s)
- Added AAC audio stream encoder (96 kbit/s)
- Added video stream decoder
Add the following new demo programs:
CaptureManager
SDK v1.15.0. This release includes the following changes:
- Added fast SSE2/SSSE3/SSE4.2 copy
CaptureManager
SDK v1.16.0. This release includes the following changes:
- Added interfaces for mixing video and audio streams:
IMixerNodeFactory
IVideoMixerControl
IAudioMixerControl
Added the new demo programs listed:
CaptureManager
SDK v1.17.0. This release includes the following changes:
- Expanded the H264 and H265 video encoder bitrate up to 2^25 (33,554,432)
Capture Manager SDK v1.18.0. This release includes the following changes:
Add the new demo program:
Capture Manager SDK v1.19.0. This release includes the following changes:
- Added supporting of H264 webcam
CaptureManager
SDK v1.19.1. This release includes the below changes:
- Fixed bug with pull mode
- Fixed bug with opening a camera already in use
CaptureManager
SDK v1.20.0. This release includes the following change:
- Added async/await invoking model
CaptureManager
SDK v1.21.0. This release includes the following change:
- Replaced DirectX9 rendering with DirectX11 rendering.
Added the new demo program:
CaptureManager
SDK v1.22.0. This release includes the following changes:
- Added the new interface ILogPrintOutCallback.
Add the new demo program:
CaptureManager
SDK v1.23.0. This release includes the following changes:
- Improved Virtual Machine VMware support.
CaptureManager
SDK v1.24.0. This release includes the following changes:
- Add hardware button trigger event support.
Add the new demo program:
Points of Interest
Previously, I wrote that there was one unusual task which was the reason to start development of CaptureManager. The task included recording live video of an industrial process from two sources and recording data from two sensors. I started to write a new solution on Microsoft Media Foundation and thought of using the Microsoft ASF format due to the existence of the MFMediaType_Binary Major Type. However, after some time, I found that the implementation of the Microsoft ASF format in Microsoft Media Foundation supports recording only of MFMediaType_Video, MFMediaType_Audio, and MFMediaType_Script. That was the main reason to stop resolving that task.
Updates
- improved quality of the AudioLoopback capture;
- resolved a problem with synchronization of the Screen Capture Source - GDIScreenCapture - and AudioLoopback;
- added new Media Types for the Screen Capture Source - GDIScreenCapture - with the following frame rates: 20 fps, 25 fps and 30 fps;
- added to the Screen Capture Source - GDIScreenCapture - support for capturing the cursor and drawing it into the captured video.
- fixed losing of samples in the AudioLoopback capture;
- added new Media Types for the Screen Capture Source - GDIScreenCapture - with the following frame rates: 1 fps, 5 fps and 10 fps;
- added support of the WM Speech Encoder DMO;
- added supporting of WMVideo9 Screen Encoder MFT - FOURCC "MSS2";
- added version of
CaptureManager
SDK for x64 Windows OS.
- added the new Screen Capture Source - DirectX9ScreenCapture - with the following frame rates: 1 fps, 5 fps, 10 fps, 15 fps, 20 fps, 25 fps, 30 fps;
- added the library CaptureManagerProxy.dll for calling CaptureManagerSDK descriptors in "C" style;
- added support of 3rd-party MediaSink solutions - RecordInto3rdPartyMediaSink;
- added demo program for capturing of screen in DirectX9 video games - ScreenCaptureWithDirect3D9APIHooks;
- added limited supporting of COM Server;
- added CaptureManager.tlb;
- added demo programs which present working with CaptureManagerSDK via COM in C#-WPF:
- first stable release 1.0.0;
- implemented supporting of all SDK functionality in COM Server;
- wrote new COM Server interface;
- stopped development of C interface;
- replaced GDIScreenCapture and DirectX9ScreenCapture with the Screen Capture source;
- included full support of play, pause, stop and close functionality;
- updated code of the demo programs which present working with CaptureManagerSDK via COM in C#-WPF:
- added new demo program for demonstration of functionality of recording and network streaming media content:
- added demo program for working with CaptureManager SDK in Qt framework programs with the MinGW compiler: QtMinGWDemo
- added demo program for working with CaptureManager SDK in a Windows Store application.
Stable release 1.1.0:
- added support of automatic registration of CaptureManager as a COM server with a Type Library;
- added support of the dynamic language Python 2.7;
- added support of the HEVC (H265) encoder in Windows 10: Download HEVCEncoder_Windows10;
Beta release 1.2.0:
- deleted the old functionality of working with CaptureManager via library linking; the old demo programs are moved to old demos;
- did lazy binding of Microsoft Media Foundation functions;
- replaced the web camera properties functionality from DirectShow to DeviceIoControl;
- added resizing in EVR.
Release 1.2.0:
- deleted the old functionality of working with CaptureManager via library linking; the old demo programs are moved to old demos;
- did lazy binding of Microsoft Media Foundation functions;
- replaced the web camera properties functionality from DirectShow to DeviceIoControl;
- added resizing in EVR;
- added PULL mode in SampleGrabberCallSinkFactory - the new mode which allows taking a single sample;
- added sample accumulator nodes for storing the last 5 or 10 samples of a media stream;
- added four new demo programs:
Beta release 1.3.0:
- added support of video capturing via the DirectShow Crossbar technique for the following inputs:
- Composite
- SVideo
- USB
- 1394 (FireWire)
Added two new demo programs for the Java programming language:
- added the NuGet CaptureManager SDK package - a package for NET 4.0 projects. It is based on CaptureManagerToCSharpProxy, but there is one difference - the package includes CaptureManager.dll as an embedded resource. It allows unpacking CaptureManager.dll into the Temp folder at runtime and loading it into the application process. It is simpler than registering CaptureManager.dll as a COM server module in the system - it does not need admin privileges. This package supports CaptureManager.dll for both x86 and x64 CPU architectures and selects the suitable one at runtime.
Release 1.3.0. This release includes the below changes:
- added support of video capturing via the DirectShow Crossbar technique for the following inputs:
- Composite
- SVideo
- USB
- 1394 (FireWire)
- the COM ThreadingModel has been changed from Apartment to Both.
Added the new demo program on WindowsForms GDI with support of source type selecting:
Beta release 1.4.0 - Version 1.4.0 Beta:
- Added support for changing the cursor presentation in the Screen Capture source: the cursor can be made invisible, and an additional image can be drawn for the current cursor's image.
Added the new demo programs:
Release 1.4.0 - Version 1.4.0. This release includes the following changes:
- added support for changing the cursor presentation in the Screen Capture source: the cursor can be made invisible, and an additional image can be drawn for the current cursor's image;
- added support for using custom Microsoft Media Foundation Transforms for implementing custom effects.
Download CaptureManager SDK v1.4.0
Release 1.5.0 - Version 1.5.0. This release includes the below changes:
- added support of multidisplay outputs;
- added support of DirectShow software filters.
Added the new demo programs:
Release 1.6.0 - Version 1.6.0. This release includes the following changes:
- Added the IEVRMultiSinkFactory and IEVRStreamControl interfaces: IEVRMultiSinkFactory allows creating two separate Topology Nodes for one rendering context - an HWND of a Window, a Direct3DSurface9 or an IDXGISwapChain; IEVRStreamControl allows controlling the rendering - rendering position, rendering Z-order, flushing of the stream buffer, filters and features.
- Added the IInitilaizeCaptureSource, ICurrentMediaType, ISourceRequestResult and ICaptureProcessor interfaces: IInitilaizeCaptureSource receives the configuration of the Capture Processor; ICurrentMediaType contains info about the selected Media Type; ISourceRequestResult receives raw data in the format which is defined in IInitilaizeCaptureSource; ICaptureProcessor controls the capturing process.
Added the new demo programs listed below:
Release 1.7.0 - Version 1.7.0. This release includes the following changes:
- Added the CaptureManager Video Renderer Factory which is compatible with the MediaFoundation player.
Download CaptureManager SDK v1.7.0
Added the new demo programs:
Released 1.8.0 - Version 1.8.0. This release includes the below changes:
- added "normalize" mode for the "Screen Capture" source;
- added the IRenderingControl interface for controlling rendering on the target process;
- added the "Shape" attribute for the "Cursor" option of the "Screen Capture" source;
- added "Clip" type option of "Screen Capture" source.
Added the new demo programs:
Release 1.9.0 - Version 1.9.0. This release includes the following changes:
- added "
HWND
" mode for the "Screen Capture" source
Added the new demo programs:
Release 1.10.0 - Version 1.10.0. This release includes the below changes:
- added the ISwitcherNodeFactory interface for pause/resume of a media stream;
- added the ISwitcherControl interface for controlling (pause, resume, detach, attach) media streams.
Added the new demo programs:
Release 1.11.0 - Version 1.11.0. This release includes the below changes:
- added DirectX11 Texture Capture
- changed Screen Capture output format from RGB32 to NV12
Release 1.12.0 - Version 1.12.0. This release includes the below changes:
- added the CM_DEVICE_LINK attribute with a unique device id;
- supported the DirectX9Ex texture shared handle.
Added the new demo programs:
Release 1.13.0 - Version 1.13.0. This release includes the following changes:
- added support of the DirectShow Video Renderer.
Add the new demo programs:
Release 1.14.0 - Version 1.14.0. This release includes the following changes:
- added H.264/MPEG-4 AVC video stream encoder (from 256 kbit/s to 64 kbit/s)
- added H.265/MPEG-4 HEVC video stream encoder (64 kbit/s)
- added AAC audio stream encoder (96 kbit/s)
- added video stream decoder
Added the new demo programs:
Release 1.15.0 - Version 1.15.0. This release includes the following changes:
- added fast SSE2/SSSE3/SSE4.2 copy.
Release 1.16.0 - Version 1.16.0. This release includes the following changes:
- added interfaces for mixing video and audio streams:
IMixerNodeFactory
IVideoMixerControl
IAudioMixerControl
Added the new demo programs:
- published the source code of CaptureManager SDK under an open source license. The source code can be downloaded from this link.
Release 1.17.0 - Version 1.17.0. This release includes the following change:
- expanded the H264 and H265 video encoder bitrate up to 2^25 (33,554,432)
Release 1.18.0 - Version 1.18.0. This release includes the following change:
Release 1.19.0 - Version 1.19.0. This release includes the below change:
- added supporting of H264 webcam
Release 1.19.1 - Version 1.19.1. This release includes the below change:
- fixed bug with pull mode
- fixed bug with opening a camera already in use
Release 1.20.0 - Version 1.20.0. This release includes the below change:
- added async/await invoking model
Release 1.21.0 - Version 1.21.0. This release includes the following change:
- replaced DirectX9 rendering with DirectX11 rendering
Add the new demo programs:
Release 1.22.0 - Version 1.22.0. This release includes the following changes:
- added the new interface ILogPrintOutCallback
Add the new demo programs:
Release 1.23.0 - Version 1.23.0. This release includes the following changes:
- improved Virtual Machine VMware support.
Release 1.24.0 - Version 1.24.0. This release includes the following changes:
- Add hardware button trigger event support.
Add the new demo programs:
History
- 11th August, 2015: Version 1
- 21st August, 2015: Update
- 31st August, 2015: Update
- 13th November, 2015: Update
- 7th March, 2016: Update
- 21st March, 2016: Update
- 12th April, 2016: Update
- 13th June, 2016: Update
- 1st August, 2016: Update
- 5th September, 2016: Update
- 3rd October, 2016: Update
- 19th December, 2016: Update
- 2nd January, 2017: Update
- 9th January, 2017: Update
- 16th January, 2017: Update
- 6th February, 2017: Update
- 13th March, 2017: Update
- 1st May, 2017: Update
- 10th July, 2017: Update
- 31st July, 2017: Update
- 21st August, 2017: Update
- 23rd October, 2017: Update
- 15th January, 2018: Update
- 5th March, 2018: Update
- 9th April, 2018: Update
- 13th August, 2018: Update
- 1st October, 2018: Update
- 19th November, 2018: Update
- 11th February, 2019: Update
- 15th April, 2019: Update
- 6th July, 2020: Update
- 20th July, 2020: Update
- 16th November, 2020: Update
- 1st February, 2021: Update
- 12th April, 2021: Update
- 6th September, 2021: Update
- 6th December, 2021: Update
- 4th July, 2022: Update
- 12th December, 2022: Update
- 1st May, 2023: Update
- 18th December, 2023: Update