Having trouble getting started.

Topics: Question
Dec 4, 2015 at 2:04 AM
Edited Dec 4, 2015 at 2:05 AM
I downloaded the source for the container from https://net7mma.codeplex.com/SourceControl/changeset/view/111009, as well as the latest source code, and tried the following with both of them, in both Visual Studio 2013 and 2015.

I unzipped the file, opened the Media solution, made a new executable project, set it as the startup project, and when I run it I get the following errors in the Container:
_client.Connect(); in RtpClientUnitTests.cs
RtpClientUnitTests.TestInterleavedFraming in Program.cs

and in the latest source code I get tons of errors, but that might be expected.

And to run the library at the most basic level I tried to use the code on the home page, but that didn't work either: it can't find Rtsp, even though I already added it to the references.

It would be great if you could help with any of these issues. I feel like I just don't understand it enough.
Dec 4, 2015 at 12:01 PM
You didn't list any errors; also, the unit tests should be the startup project.

What bugs do you have? (You didn't list those either)

The code on the homepage is probably just using the wrong namespace with the latest code. I'm refactoring the libraries to not depend on System.Drawing as well as adding better RTP profile support, but things should work as well now as they did before in that regard.

Let me know what you need help with and I'll see what I can do.
Dec 4, 2015 at 2:04 PM
I have just updated the code again, please let me know if you're still having trouble.
Dec 5, 2015 at 7:05 AM
Thanks. I'll be able to test it out again on Monday and let you know how it went.

The errors when I ran the code in the container release were:

_client.Connect(); in RtpClientUnitTests. -"Cannot find Connect()"
RtpClientUnitTests.TestInterleavedFraming in Program.cs - This was underlined, but I don't remember the error here.

So, were the steps I took correct? Sorry, I'm a beginner to... everything really.
So the steps were: unzip (the container release or latest?) => open the solution (called Media, in Visual Studio 2013/2015) => add a new project (console or Windows Forms) => set it as the startup project => copy the code on the homepage => run?

Sorry, I don't know enough to ask for help at this point. Thanks a lot though, for the fast reply and help.
Dec 6, 2015 at 6:36 PM
I found the errors.

For both of the lines it's "Does not contain a definition for ..."
Dec 6, 2015 at 9:03 PM
Edited Dec 6, 2015 at 9:05 PM
I managed to get the code to work. I'm using the release "111003", and the code I used is:
            //Create the server optionally specifying the port to listen on
            Media.Rtsp.RtspServer server = new Media.Rtsp.RtspServer();

            //Create a stream which will be exposed under the name Uri rtsp://localhost/live/
            //From the RtspSource rtsp://
            Media.Rtsp.Server.MediaTypes.RtspSource source = new Media.Rtsp.Server.MediaTypes.RtspSource("YOO", "rtsp://");

            //Start the server and underlying streams
            //(the add/start calls were omitted from this paste; presumably something
            //like server.TryAddMedia(source); and server.Start(); went here)
            //The server is now running, you can access the stream with VLC, QuickTime, etc
I had to add the "Media." before "Rtsp.RtspServer", and it runs. I'm confused now, though: how am I able to open this using VLC?

I can open it by just streaming "rtsp://", but I could do that beforehand too. Ultimately I want to be able to control the device remotely, but for now I just want to open up the screen through C#. Is this possible?
Dec 6, 2015 at 9:57 PM
I tried "rtsp://localhost/live/test" in VLC, and it didn't work. Don't know why; the server is running.

When I try:
            foreach (var item in server.MediaStreams)
It said false and stopped, but I fixed that with a sleep beforehand.

But I was hoping I could connect with rtsp://localhost/live/test, but I can't for some reason.
Dec 7, 2015 at 1:19 AM
Also, sorry for bombarding you with questions, but is there any way I could get the video stream WITHOUT using an external program like VLC?
Dec 7, 2015 at 8:27 PM
It seems you don't understand how to access the stream after creating it.

'Media.Rtsp.Server.MediaTypes.RtspSource("YOO", "rtsp://");'

produces an RTSP stream which can be accessed on the local machine like so: "rtsp://localhost/live/YOO"

The name given to the stream in the constructor essentially dictates the segment in the Uri after the 'live' segment.

That should clear that up.

As far as the source not being ready, this could be a variety of things. It shouldn't take much longer than when the call returns, but the work is started on another thread, which implies that a context switch needs to occur; if you're already using a lot of threads then this has the potential to take a bit longer.

If you want to wait for a particular stream to start synchronously then I would advise calling the 'Start' method on the stream before adding it to the server; at that point the call happens on the same thread, and when it returns you can check 'Ready' on the source directly without needing to 'Sleep'.
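A minimal sketch of that synchronous approach, using the 'Start' and 'Ready' members described above; the 'TryAddMedia' and server 'Start' calls are assumptions based on this thread, not verified API:

```csharp
//A sketch only; the source URI is a placeholder, as elsewhere in this thread
Media.Rtsp.RtspServer server = new Media.Rtsp.RtspServer();

Media.Rtsp.Server.MediaTypes.RtspSource source =
    new Media.Rtsp.Server.MediaTypes.RtspSource("test", "rtsp://");

//Starting on this thread means 'Ready' reflects the result when the call returns,
//so no Sleep is needed before checking it
source.Start();

if (source.Ready)
{
    server.TryAddMedia(source); //assumed server method
    server.Start();             //assumed server method
}
```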

Lastly, for 'seeing' the video, it depends on the codec of the media you're using.

Right now at this very instant only JPEG is supported and only using System.Drawing. (RFC2435)

I have plans to remove the dependency on System.Drawing soon and only use classes implemented in the library; I will then provide wrapper classes for the various platforms to allow them to inter-operate with the System.Drawing classes.

With that being said, some have used cscodec as well as various other managed codec implementations to encode or decode data in a particular codec. For instance, H.264 packetization as well as depacketization is complete, which means you can take a large buffer of H.264 data and send it out easily, or you can create a 'Stream' which can be fed to a decoder from a series of incoming packets.

L8 and L16 (audio) are supported because they don't have a special format; you can easily just use the RtpFrame and Assemble them.
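A rough sketch of that idea; 'RtpFrame' and 'Assemble' are named above, while the exact signatures and return types here are assumptions:

```csharp
using System.Linq;

//For L8/L16 the assembled payload is already the raw PCM samples,
//since those payload formats add nothing beyond the RtpHeader
static byte[] GetPcm(Media.Rtp.RtpFrame frame)
{
    //Assemble strips the RtpHeader from each packet and
    //concatenates the payloads in sequence order
    return frame.Assemble().ToArray();
}
```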

Other types of audio are also supported, e.g. AAC (RFC 3640).

Just about every RTP profile is implemented; if you find one that's not, let me know and I will be sure to add it.

Where RTP is used in this process is before the decoder gets the data: there is the process of removing the RtpHeader from each RtpPacket, as well as, in some cases, creating new data which must precede the payload of the RtpPacket in the data which eventually gets sent to a decoder to be decoded.

If you post the SessionDescription from your source I will be able to provide more insight.

I hope that answered your questions, let me know if you need anything else.
Marked as answer by juliusfriedman on 12/7/2015 at 1:27 PM
Dec 7, 2015 at 11:59 PM
produces a rtsp stream which can be accessed on the local machine like so "rtsp://localhost/live/YOO"
Sorry, I did try that (but with "test" instead of "YOO", after I renamed it :D). In VLC's Open Network Stream I got:
Connection failed:
VLC could not connect to "localhost:554".
Your input can't be opened:
VLC is unable to open the MRL 'rtsp://localhost/live/test'. Check the log for details.

And so, getting the video to open in just C# using your library is possible, but very difficult? Sorry, I'm new to all of this stuff; I didn't even know what RTSP stood for until Friday, and I still know almost nothing about it.

Also, how do I post the SessionDescription?
Dec 8, 2015 at 12:05 AM
The VLC log says:

core error: open of `rtsp://' failed
core debug: dead input
core debug: changing item without a request (current 0/1)
core debug: nothing to play
qt4 debug: IM: Deleting the input

You seem really helpful and so does the library, but do you think it might be easier for me to use C++ to do this?
Ultimately I want to be able to remotely control another device on the same network (but I want the application to work on Windows Phones).
Dec 8, 2015 at 7:43 PM
The 'SessionDescription' can be found as a property of the 'RtspClient'; it is serialized as a string quite easily using the 'ToString' method. Its property value is assigned after the call to 'SendDescribe'.
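For example, a minimal sketch; the constructor form and 'Connect' call are assumptions, while 'SessionDescription', 'ToString' and 'SendDescribe' are as described above:

```csharp
using System;

//The URI is a placeholder for your actual source
Media.Rtsp.RtspClient client =
    new Media.Rtsp.RtspClient("rtsp://localhost/live/test");

client.Connect();

//SessionDescription is assigned after SendDescribe completes
client.SendDescribe();

//Serialize it as a string so it can be posted here
Console.WriteLine(client.SessionDescription.ToString());
```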

To accurately diagnose your issues I will need to see a small sample of how you create the stream and how you add it to the server, although without seeing anything I imagine you are running this code on Windows, where sometimes Windows Media Player already has a service listening on port 554.

Here is a link to a post which will tell you how to check which process is using which ports @StackOverflow
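On Windows that boils down to something like the following (a sketch; the PID shown is hypothetical):

```shell
# List listening sockets with owning process ids, filtered for port 554
netstat -ano | findstr :554

# Map the PID from the last column (e.g. 1234, hypothetical) to a process name
tasklist /FI "PID eq 1234"
```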

Lastly, do I think it would be easier for you to do this in C++?

Do what exactly?

I don't really know what you are doing so it's hard to say.

Typically media decoding needs to be performed as quickly as possible, and often there are specific system devices which perform the decoding of the data rather than the CPU: the GPU for images and video, and the sound card for audio.

Those devices often can decode the data much faster than the CPU and thus reduce overhead and increase performance.

What I can say is that there is no reason to use C++ over C# for really anything nowadays, especially now that SIMD support has made it to .NET. The other reason one would use C++ would be to use some type of assembly/machine code which has no other way of being accessed, e.g. a processor-specific feature; some processors have a special instruction (known as an intrinsic) which may be especially relevant to media decoding, or an on-board GPU which can handle the routine much faster than it would normally take to execute in the CPU pipeline without such an instruction or special device.

If you need to access those functions you could always write a small C++ library and then inter-operate with that library in C#.

What it comes down to, more or less, is code availability: there are already a lot of libraries in C++ which can encode and decode video, such as LibAv or GStreamer, whereas in C# there are usually only bindings to such libraries, such as FFMPEGSharp or GStreamer#.

As you will see, using either of those bindings will allow you to decode video quite easily, although they require the C++ libraries to be built and present for the bindings to work.

Keep in mind the bindings may expose GDI objects in the form of 'Bitmap', which will only work if 'System.Drawing' is available; e.g. in Silverlight there is no 'System.Drawing'.

This is why I am making the effort to not rely on 'Bitmap': I am creating my own set of classes so that when a developer has to target a different platform they are not stuck rewriting display code; they can just create the objects for the platform paradigm they desire without having to change the underlying code logic.

All Encoders and Decoders will then work with those classes to provide a better overall experience which can then be derived to use external libraries where desired for performance.

Hopefully that sums everything up.

Let me know if you have any other questions.
Marked as answer by juliusfriedman on 12/8/2015 at 12:43 PM
Dec 9, 2015 at 3:28 AM

I'll get back to you if I have more questions :D
Dec 9, 2015 at 9:21 PM
No problem, one is glad to be of service.