Grab Frame from H.264 RFC6184Stream

Oct 8, 2014 at 5:55 AM
I have been able to download the latest code and build it, and I can capture a live stream from a Grandstream IP camera. This library works very well, but I am stuck on grabbing frames from a live RTSP stream. The media format is 96 and I can get each frame in the RtpFrameChanged event; I am using RFC6184Stream and Depacketize it, but I have failed to save each frame to a separate file.
Below is my code.

void m_RtpClient_RtpFrameChanged(object sender, Media.Rtp.RtpFrame frame)
    {
        try
        {
            if (!frame.Complete) return;

            Media.Rtp.RtpClient.TransportContext tc = Client.m_RtpClient.GetContextBySourceId(frame.SynchronizationSourceIdentifier);

            if (tc == null) return;

            Media.Sdp.MediaDescription mediaDescription = tc.MediaDescription;

            if (mediaDescription.MediaType == Media.Sdp.MediaType.audio)
            {
                //throw new NotImplementedException();
            }
            else if (mediaDescription.MediaType == Media.Sdp.MediaType.video)
            {
                if (mediaDescription.MediaFormat == 26)
                {
                    //OnFrameDecoded(m_lastFrame = (new Rtp.JpegFrame(frame)).ToImage());
                }
                else if (mediaDescription.MediaFormat >= 96 && mediaDescription.MediaFormat < 128)
                {
                    if (frame.IsEmpty)
                    {
                    }
                    else if (frame.IsMissingPackets)
                    {
                    }
                    else
                    {
                        var fft = (new Media.Rtsp.Server.Streams.RFC6184Stream.RFC6184Frame(frame));
                        fft.Depacketize();
                        // I want this frame as a System.Drawing.Bitmap here - how?
                        //System.IO.MemoryStream ms = new System.IO.MemoryStream(fft.Buffer.ToArray());
                        //Image img = Image.FromStream(ms);

                        //img.Save("E:\\abc\\" + cou.ToString() + ".jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
                        //cou++;

                    }
                }
                else
                {
                    //0 - 95 || >= 128
                    //throw new NotImplementedException();
                }
            }
        }
        catch
        {
            return;
        }
    }

Any sample source code would be highly appreciated.
Thanks
Coordinator
Oct 8, 2014 at 5:41 PM
RFC6184 depacketization is under way and will be completed when the FileStreams can read samples; playback from files will also be supported at that time.

Additionally, you will need a decoder to get a Bitmap; how you do that is up to you until decoders are implemented for your format.
Marked as answer by juliusfriedman on 10/8/2014 at 10:41 AM
Oct 23, 2014 at 10:23 AM
Hello Boss,
I saw that MxfReader is now completed. Could you please check on RFC6184 depacketization and decoders?

Thanks
Coordinator
Oct 24, 2014 at 1:23 AM
Not at the moment; the Demuxer release should be what you're looking for. It should be a few weeks out at the most.
Nov 8, 2014 at 6:24 AM
Edited Nov 12, 2014 at 6:42 AM
Boss, I saw that you mentioned fixes in release 110005. I used the Demuxer and decoded the frame but still didn't get any success. I even added the SPS and PPS and used cscodec from https://net7mma.codeplex.com/discussions/566857 but didn't get any image. I am trying to grab an RTSP stream from a Grandstream camera using the link below.
All I want is to save each frame to a folder. Can you please help me grab frames?
Coordinator
Nov 8, 2014 at 8:27 PM
Try the MJPEG stream if your camera provides one. I am going to be working on the Depacketize method shortly, but nothing right now until I have the Container work completed, which should be within a few days.
Coordinator
Nov 11, 2014 at 6:16 PM
Edited Nov 11, 2014 at 6:17 PM
Okay, give the 110052 changeset a go. I think I fixed the issue. Let me know if it needs further attention
Marked as answer by juliusfriedman on 11/11/2014 at 11:16 AM
Nov 12, 2014 at 7:47 AM
Dear juliusfriedman, thank you so much for the update. I tried 110053 with cscodec (https://github.com/soywiz/cscodec) as you mentioned in https://net7mma.codeplex.com/discussions/499989, but I still failed to save/grab frames from the H.264 stream. Would you be so kind as to check my frame-grabbing code below?
int cou = 1;
var ReceivedFrame = new Media.Rtsp.Server.Media.RFC6184Media.RFC6184Frame(frame);
if (ReceivedFrame.Complete)
{
    ReceivedFrame.Depacketize();

    if (first_times)
    {
        SPS_PPS_code();
        AppendAllBytes("E:\\abc\\" + cou.ToString(), PPS_SPS);
        first_times = false;
    }
    AppendAllBytes("E:\\abc\\" + cou.ToString(), ReceivedFrame.Buffer.ToArray());

    var FrameDecoder = new cscodec.h264.player.FrameDecoder(System.IO.File.OpenRead("E:\\abc\\" + cou.ToString()));
    while (FrameDecoder.HasMorePackets)
    {
        var decoded_frame = FrameDecoder.DecodeFrame();
        var Image = cscodec.FrameUtils.imageFromFrame(decoded_frame);
        Image.Save("E:\\abc\\" + cou.ToString() + ".jpg");
    }
}
//////////////////////////////////////////////////////////
byte[] PPS_SPS;
bool first_times = true;

public static void AppendAllBytes(string path, byte[] bytes)
{
    //argument-checking here.

    using (var stream = new System.IO.FileStream(path, System.IO.FileMode.Append))
    {
        stream.Write(bytes, 0, bytes.Length);
    }
}

void SPS_PPS_code()
{
    byte[] slice_code = { 0x00, 0x00, 0x00, 0x01 };

    // The sprop-parameter-sets values are Base64; they must be decoded,
    // not converted as Unicode text.
    byte[] SPS = System.Convert.FromBase64String("Z0LAHtkDxWhAAAADAEAAAAwDxYuS");
    byte[] PPS = System.Convert.FromBase64String("aMuMsg==");

    PPS_SPS = add_byte_array(slice_code, SPS);
    PPS_SPS = add_byte_array(PPS_SPS, slice_code);
    PPS_SPS = add_byte_array(PPS_SPS, PPS);
}

private byte[] add_byte_array(byte[] a, byte[] b)
{
    byte[] c = new byte[a.Length + b.Length];
    System.Buffer.BlockCopy(a, 0, c, 0, a.Length);
    System.Buffer.BlockCopy(b, 0, c, a.Length, b.Length);
    return c;
}

Thanks a lot for such a nice library :)
Coordinator
Nov 13, 2014 at 12:00 AM
You may not need to add the SPS and PPS if they were contained in the RFC6184Frame, but I don't think it can hurt.

E.g. I may change the Depacketize method to have an out bool containsSpsOrPps so people know.

110071 should further fix any issues; let me know if I am getting closer!
Nov 13, 2014 at 6:11 AM
I removed the SPS and PPS but am still facing the same error. When I add the SPS and PPS it constructs the decoder correctly but gives an error in the DecodeFrame method, while on removing the SPS and PPS it gives the same error during construction. Please have a look at the attached image.


Should I share a test sample program with you so that you can check it and perfect it?

Thank you so much.
Coordinator
Nov 13, 2014 at 2:38 PM
Edited Nov 13, 2014 at 2:55 PM
I have updated the code; 110082 should resolve this issue.

If you have a test program which knows what the data should look like before and after packetization, that will help; otherwise it's not going to be useful.

Give the latest code a run and let me know if I should think about adding a way to determine if the SPS, PPS and SEI are included.

And in 110084 I added a way to determine if the SPS, PPS and SEI are included; it may need some work, so let me know!
Nov 13, 2014 at 4:00 PM
Edited Nov 14, 2014 at 8:34 AM
Thank you for the update. I spent all day testing different demuxers/decoders (FFmpeg etc.) but nothing worked. As per your instructions I tested 110084 and found the same error.
Okay, I will try to show you the data before and after, but in the meanwhile you may try the RTSP stream below; it is the same stream I am testing against.

Thanks Again and waiting for your response.
Coordinator
Nov 13, 2014 at 6:33 PM
Thanks for the test camera. Okay, 110091 has more fixes.

It seems I wasn't handling Fragmentation Units correctly; I have adjusted the code yet again.

Hopefully this should do it!

Let me know!
Nov 13, 2014 at 6:56 PM
Boss, I tried 110091 and am still facing the same issue. I am testing it with your test app RtspInspector, which is Windows Forms based; in RtspInspector.cs I added the above RtpFrameChanged event. I can share my screen or record a video on your green signal.

Waiting for your kind response....
Coordinator
Nov 13, 2014 at 7:13 PM
Edited Nov 13, 2014 at 7:17 PM
It might have something to do with csCodec not being complete; I really am not completely sure.

Try 110094. I would verify with VLC rather than csCodec, just in case.

If there is still a problem in VLC please include any error messages so I can use them to try and figure out what is going wrong.

Thanks!
Coordinator
Nov 13, 2014 at 9:41 PM
Edited Nov 13, 2014 at 9:45 PM
110099 or 110100 might actually fix it; I have read the RFC a few more times and ensured that everything should be correct.

I have also added support for RFC6185 and RFC6190 if needed.

Let me know if everything is working as expected!
Nov 14, 2014 at 8:33 AM
Edited Nov 16, 2014 at 5:02 PM
Sorry for the delay. I just tried 110103; in this release RFC6184, RFC6185 and RFC6190 throw an "Object reference not set to an instance of ..." error, while RFC6416 works but then gives the same error. I have created a small app with only 10 lines of code which replicates the error. All I want is to save the frames to a folder. Please have a look...



Thanks again for all your hard work.
Coordinator
Nov 14, 2014 at 3:56 PM
Hello, I am sorry for all the trouble.

I have downloaded your example and I cannot get an "Object reference not set to an instance of an object" exception, so I cannot replicate it.

I am going to run some tests on Depacketization now to see if I can recognize anything.

I will let you know what I find.
Coordinator
Nov 14, 2014 at 5:41 PM
Edited Nov 14, 2014 at 9:34 PM
Okay, please see the attached code.

I have verified it works for Big Buck Bunny as well as quite a few cameras.
if (thisIsTheFirstWriteToTheFile)
{
    Media.Sdp.SessionDescriptionLine fmtp = context.MediaDescription.FmtpLine;

    byte[] sps = null, pps = null;

    foreach (string p in fmtp.Parts)
    {
        string trim = p.Trim();
        if (trim.StartsWith("sprop-parameter-sets=", StringComparison.InvariantCultureIgnoreCase))
        {
            string[] data = trim.Replace("sprop-parameter-sets=", string.Empty).Split(',');
            sps = System.Convert.FromBase64String(data[0]);
            pps = System.Convert.FromBase64String(data[1]);
            break;
        }
    }

    bool hasSps, hasPps, sei, slice, idr;

    hframe.Depacketize(out hasSps, out hasPps, out sei, out slice, out idr);

    using (var stream = new System.IO.MemoryStream())
    {
        if (!hasSps && sps != null)
        {
            stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);
            stream.Write(sps, 0, sps.Length);
        }

        if (!hasPps && pps != null)
        {
            stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);
            stream.Write(pps, 0, pps.Length);
        }

        hframe.Buffer.CopyTo(stream);

        stream.Position = 0;

        //Write All Bytes stream
    }
}
else
{
    hframe.Depacketize();
    //Append All Bytes hframe.Buffer.ToArray()
}
Using 'cscodec' it would look something like this.
FrameDecoder frameDecoder = null;

        void Client_RtpFrameChanged(object sender, Media.Rtp.RtpFrame frame)
        {
            try
            {

                var context = ((Media.Rtp.RtpClient)sender).GetContextByPayloadType(frame.PayloadTypeByte);

                if (context == null || context.MediaDescription.MediaType != Media.Sdp.MediaType.video) return;

                using (Media.Rtsp.Server.Media.RFC6184Media.RFC6184Frame hframe = new Media.Rtsp.Server.Media.RFC6184Media.RFC6184Frame(frame))
                {
                    if (frameDecoder == null)
                    {
                        Media.Sdp.SessionDescriptionLine fmtp = context.MediaDescription.FmtpLine;

                        byte[] sps = null, pps = null;

                        foreach (string p in fmtp.Parts)
                        {
                            string trim = p.Trim();
                            if (trim.StartsWith("sprop-parameter-sets=", StringComparison.InvariantCultureIgnoreCase))
                            {
                                string[] data = trim.Replace("sprop-parameter-sets=", string.Empty).Split(',');
                                sps = System.Convert.FromBase64String(data[0]);
                                pps = System.Convert.FromBase64String(data[1]);
                                break;
                            }
                        }

                        bool hasSps, hasPps, sei, slice, idr;

                        hframe.Depacketize(out hasSps, out hasPps, out sei, out slice, out idr);


                        using (var stream = new System.IO.MemoryStream())
                        {
                            if (!hasSps && sps != null)
                            {
                                stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);

                                stream.Write(sps, 0, sps.Length);
                            }

                            if (!hasPps && pps != null)
                            {
                                stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);

                                stream.Write(pps, 0, pps.Length);
                            }

                            stream.Position = 0;

                            playStream(stream);
                        }
                    }
                    else hframe.Depacketize();

                    playStream(hframe.Buffer);
                }
            }
            catch
            {
                return;
            }
            
        }

        public static void CenterForm(Form theForm)
        {
            theForm.Location = new Point(
                Screen.PrimaryScreen.WorkingArea.Width / 2 - theForm.Width / 2,
                Screen.PrimaryScreen.WorkingArea.Height / 2 - theForm.Height / 2);
        }

public bool playStream(Stream fin)
{
    if (frameDecoder == null) frameDecoder = new FrameDecoder(fin);
    else frameDecoder.SetStream(fin);

    try
    {
        while (true)
        {
            var picture = frameDecoder.DecodeFrame();

            var Width = picture.imageWidthWOEdge;
            var Height = picture.imageHeightWOEdge;

            if (this.frame.ClientSize.Width < Width || this.frame.ClientSize.Height < Height)
            {
                this.frame.Invoke((Action)(() =>
                {
                    this.frame.ClientSize = new Size(Width, Height);
                    CenterForm(this.frame);
                }));
            }
            this.frame.CreateGraphics().DrawImage(FrameUtils.imageFromFrameWithoutEdges(picture, Width, Height), Point.Empty);
        }
    }
    catch (EndOfStreamException)
    {
        return false;
    }
}
This works as long as the codec type is indicated as 'a=rtpmap:XX H264/90000'. If the codec uses RCDO (RFC6185) or SVC (RFC6190), then the corresponding Frame classes would be used.

For 'a=rtpmap:XX MP4V-ES/90000' you would use RFC6416; the process for writing does not require the SPS and PPS portion, but may require the 'iods' or other data.

I will address that when required.

When testing your camera I noticed it sends the SPS and PPS multiple times, and the csCodec code doesn't seem to handle this well; you may need to patch this up in order for your stream to work with it (see the sketch below), or just save the files to disk/memory and use another decoder.
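For reference, here is a minimal sketch of one way to patch it up: keep only the first SPS (NAL type 7) and PPS (NAL type 8) found in an Annex B buffer before handing the data to csCodec. The start-code scanning is standard H.264; applying the filter at this point in your pipeline is an assumption.
static byte[] DropRepeatedParameterSets(byte[] annexB)
{
    var kept = new System.IO.MemoryStream();
    bool seenSps = false, seenPps = false;

    // find every 00 00 01 start code (a 4-byte code is matched at its last 3 bytes)
    var starts = new System.Collections.Generic.List<int>();
    for (int i = 0; i + 2 < annexB.Length; i++)
        if (annexB[i] == 0 && annexB[i + 1] == 0 && annexB[i + 2] == 1)
            starts.Add(i);

    for (int n = 0; n < starts.Count; n++)
    {
        int begin = starts[n];
        int end = n + 1 < starts.Count ? starts[n + 1] : annexB.Length;

        if (begin + 3 >= annexB.Length) break;
        int nalType = annexB[begin + 3] & 0x1F;

        if (nalType == 7) { if (seenSps) continue; seenSps = true; } // SPS
        if (nalType == 8) { if (seenPps) continue; seenPps = true; } // PPS

        kept.Write(annexB, begin, end - begin);
    }

    return kept.ToArray();
}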

E.g. Test with another stream or Big Buck Bunny and you will see!

Let me know if you have any further issues.
Marked as answer by juliusfriedman on 11/14/2014 at 10:42 AM
Nov 14, 2014 at 10:00 PM
Hi, do you have downloadable code for your first example (not using cscodec)? Thank you.
Coordinator
Nov 14, 2014 at 10:22 PM
The latest release supports this; the example above shows how to create a file and then append the bytes, before the cscodec example.

What other type of example do you need, and what do you mean by downloadable?
Nov 14, 2014 at 10:25 PM
Oh, I meant the latest version of your library, but I just realized that I did not get the updated changes from SVN.
Nov 14, 2014 at 11:06 PM
How do I get the audio to be recorded along with the video? Thank you for the help! Great library.
Coordinator
Nov 14, 2014 at 11:11 PM
Edited Nov 14, 2014 at 11:16 PM
It depends on what you're trying to achieve; the SDP usually indicates the codec of the stream in the media description.

Depending on the format you might be able to use the Assemble method on RtpFrame.

I will add support for the audio formats which require depacketization soon; then the process to write will reflect what is required for the others.

Post the sdp and I will see if I can address it sooner.

If it's just PCM, or something audio supports without an RTP profile header, e.g. a-law or mu-law, you can use NAudio.
https://naudio.codeplex.com/discussions/442970
Marked as answer by juliusfriedman on 11/14/2014 at 4:43 PM
Nov 14, 2014 at 11:23 PM
Here is the SDP:

v=0
o=- 1416010681503233 1416010681503233 IN IP4 192.168.109.179
s=Media Presentation
e=NONE
b=AS:50032
a=control:*
a=range:npt=0.000000-
t=0
m=video 0 RTP/AVP 96
c=IN IP4 0.0.0.0
b=AS:50000
a=framerate:30.0
a=transform:1,0,0;0,1,0;0,0,1
a=control:trackID=1
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1; profile-level-id=4D4029; sprop-parameter-sets=Z01AKZpmAoAy2AtQEBAQXpw=,aO48gA==
m=audio 0 RTP/AVP 97
c=IN IP4 0.0.0.0
b=AS:32
a=control:trackID=2
a=rtpmap:97 mpeg4-generic/16000/1
a=fmtp:97 streamtype=5; profile-level-id=15; mode=AAC-hbr; config=1408; sizeLength=13; indexLength=3; indexDeltaLength=3; profile=1; bitrate=32000;

What I am trying to achieve is to record both audio and video from the RTSP stream to H.264 -> MP4.
Coordinator
Nov 14, 2014 at 11:33 PM
That's an RFC 3640 SDP.

http://tools.ietf.org/html/rfc3640#page-10

I can't tell if the decoder needs the info, since depacketization is not really mentioned and only the format is documented.

You can probably use Assemble and just give the decoder the stream; if not, you would pass Assemble the number of bytes to skip, since it seems that is static for each packet.

Let me know if I'm not correct.
Nov 14, 2014 at 11:35 PM
Also, I've added this after your first example:

stream.Position = 0;

//Write All Bytes stream

using (var fs = new FileStream("C:\\temp\\file.h264", FileMode.Append))
{
    byte[] buffer = new byte[stream.Length];

    //  stream.Seek(0, SeekOrigin.Begin);
    stream.Read(buffer, 0, (int)stream.Length);
    fs.Write(buffer, 0, buffer.Length);
    fs.Flush();
    fs.Close();
}
I can play the video through VLC, but I'm still getting periodic 'pixelation'. Am I missing something?
Awaiting your response & thank you.
Coordinator
Nov 14, 2014 at 11:42 PM
Edited Nov 14, 2014 at 11:44 PM
You can just use stream.CopyTo(fs); when disposing, Flush and Close get called for you.

The pixelation shouldn't be related to this library, as it just puts the found data in the file. It may be due to the decoding order of the frames, but that shouldn't matter, especially since the SDP indicated mode 1.

Also, the decoder should be able to handle out-of-order NALs and slices.

Anyway, let me know if you run into anything else, or if you find out something else about audio depacketization that needs attention.
Marked as answer by juliusfriedman on 11/14/2014 at 4:42 PM
Nov 14, 2014 at 11:58 PM
Could you post an example of how to use Assemble if you have time?
Would RFC6184 still work then?

I've changed the code to CopyTo; that worked as well, but it is still doing the same thing (pixelation).
Coordinator
Nov 15, 2014 at 12:03 AM
Edited Nov 15, 2014 at 12:05 AM
Assemble is a method of RtpFrame.

Just call it without any parameters for this SDP, or maybe with (false, numBytes), to get an enumerable; call ToArray and append it. A sketch follows below.

And this would only be for the audio frames; the video remains the same as it was.

To get both in a single file, use the rtpdump format until the other containers are ready for writing.

Also reset your VLC settings to see if that helps the pixelation.

Let me know if you need anything else.
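A minimal sketch of that for the audio track, assuming Assemble() with no arguments returns the reassembled payload as an IEnumerable&lt;byte&gt; (so System.Linq's ToArray applies); the file name is just for illustration.
void OnAudioFrame(Media.Rtp.RtpFrame frame)
{
    if (!frame.Complete) return;

    // Reassemble the payload and append it to the capture file.
    byte[] sample = frame.Assemble().ToArray();

    using (var fs = new System.IO.FileStream("audio.raw", System.IO.FileMode.Append))
        fs.Write(sample, 0, sample.Length);
}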
Marked as answer by juliusfriedman on 11/14/2014 at 5:05 PM
Coordinator
Nov 15, 2014 at 1:13 AM
After reading
http://thompsonng.blogspot.com/2010/03/rfc-3640-for-aac.html?m=1

it seems that the number of bytes comes from the SDP.

You must calculate and skip it. I will see about providing a class to help with that profile soon, but it will also require info from the SDP, just as RFC 6416 needs the profile id; a sketch of the AU-header parsing follows below.

I'll keep you updated.
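A hedged sketch of that calculation for the SDP posted above (sizeLength=13, indexLength=3, so each AU header is 16 bits, preceded by a 16-bit AU-headers-length field, per RFC 3640 AAC-hbr):
static System.Collections.Generic.IEnumerable<byte[]> SplitAacAccessUnits(byte[] payload)
{
    int headerLengthBits = (payload[0] << 8) | payload[1]; // AU-headers-length, in bits
    int headerBytes = (headerLengthBits + 7) / 8;          // round up to whole bytes
    int auCount = headerLengthBits / 16;                   // 13 + 3 bits per AU header
    int offset = 2 + headerBytes;                          // access units follow the headers

    for (int i = 0; i < auCount; i++)
    {
        // each AU header: 13-bit AU size, then a 3-bit index / index-delta
        int header = (payload[2 + i * 2] << 8) | payload[2 + i * 2 + 1];
        int auSize = header >> 3;

        byte[] au = new byte[auSize];
        System.Array.Copy(payload, offset, au, 0, auSize);
        offset += auSize;

        yield return au;
    }
}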
Coordinator
Nov 15, 2014 at 5:55 PM
I have added support in 110118, it should allow for correct depacketization. Let me know if you run into any issues!
Nov 16, 2014 at 5:32 PM
Hi juliusfriedman,
Thank you so, so much for all the support you've given me. I spent two days on cscodec to make that camera work perfectly. I got what I wanted; now I will put a load of 1000 cameras on this library and will let you know about the results.
Once again, thanks a lot. You're the man :)
If I can be of any help, just let me know.
Coordinator
Nov 16, 2014 at 5:50 PM
Hi, thanks for the positive result and information!

If you can post your patches I would appreciate it; nothing will be too soon, but I will be taking a look when I start to implement decoding, which is not too far off. Before that I really need to finish the container formats, the rtp tools programs, small changes in the RtpClient and RtspServer, and separation of the frame classes from the stream classes.

I also need the Media namespace from the RtspServer to change to something more suitable for transports other than RTP.

Change the base classes to not use RFC2435 and use RtpSink instead.

Change RtpSink to RtpMedia so it's more suitable for that.

Implement something in between MediaFileStream and Stream to allow containers to be more useful when streaming.

And then separate out the server from the stream implementations.

I also need to see if I can start work on the general class which will be used by all encoders and decoders, e.g. a Picture or something.

I also need to do the same for sound and, if possible, derive from and expose system classes for wave playback to allow samples and whatnot without getting too deep into something which requires hardware support.

And probably more I'm forgetting...

The more work people put in helps, even if it's only basics, suggestions, bug reports or API requests.

Documentation, examples and votes on issues and reviews of releases are also all very welcome.

If I can do anything else let me know!
Marked as answer by juliusfriedman on 11/16/2014 at 10:50 AM
Nov 17, 2014 at 7:47 AM
Edited Nov 18, 2014 at 2:13 AM
Hey guys, I am using cscodec but I could not use the SetStream function in the line "frameDecoder.SetStream(fin);". Also, the "frame" is a PictureBox element, right?

The depacketized image is still not right with 110122 (see my attached code and image):

https://drive.google.com/file/d/0B141s-K48fmVX050MEJOcGViUW8/view?usp=sharing
 void Client_RtpFrameChanged(object sender, Media.Rtp.RtpFrame frame)
        {
            try
            {

                var context = ((Media.Rtp.RtpClient)sender).GetContextByPayloadType(frame.PayloadTypeByte);

                if (context == null || context.MediaDescription.MediaType != Media.Sdp.MediaType.video) return;

                using (Media.Rtsp.Server.Media.RFC6184Media.RFC6184Frame hframe = new Media.Rtsp.Server.Media.RFC6184Media.RFC6184Frame(frame))
                {
                    if (first_times == true)
                    {
                        Media.Sdp.SessionDescriptionLine fmtp = context.MediaDescription.FmtpLine;

                        byte[] sps = null, pps = null;

                        foreach (string p in fmtp.Parts)
                        {
                            string trim = p.Trim();
                            if (trim.StartsWith("sprop-parameter-sets=", StringComparison.InvariantCultureIgnoreCase))
                            {
                                string[] data = trim.Replace("sprop-parameter-sets=", string.Empty).Split(',');
                                sps = System.Convert.FromBase64String(data[0]);
                                pps = System.Convert.FromBase64String(data[1]);
                                break;
                            }
                        }

                        bool hasSps, hasPps, sei, slice, idr;

                        hframe.Depacketize(out hasSps, out hasPps, out sei, out slice, out idr);


                        using (var stream = new System.IO.MemoryStream())
                        {
                            if (!hasSps && sps != null)
                            {
                                stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);

                                stream.Write(sps, 0, sps.Length);
                            }

                            if (!hasPps && pps != null)
                            {
                                stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);

                                stream.Write(pps, 0, pps.Length);
                            }

                            hframe.Buffer.CopyTo(stream);

                            stream.Position = 0;

                            decode_stream(stream);


                            first_times = false;
                        }
                    }
                    else hframe.Depacketize();

                    decode_stream(hframe.Buffer);

                }
            }
            catch
            {
                return;
            }

        }
        //---------------------------------------------------------------------------//
Coordinator
Nov 17, 2014 at 2:27 PM
Hey Leo,

Yes it does, but cscodec is buggy/quirky.

I added SetStream to cscodec.

It sets fin to the given stream and hasMoreNAL = true.

I also had a few patches in the SPS and PPS reading routines, as well as a few others.

Silent Warrior will probably post his code if you need it, but remember that I will be totally rewriting the decoder as soon as I handle the other issues, and there are lots of changes, so don't get too tied to cscodec etc.

Also, please try to separate issues in cscodec from my library.
Marked as answer by juliusfriedman on 11/17/2014 at 7:27 AM
Nov 17, 2014 at 2:55 PM
Hi Silent Warrior,
"I added SetStream to cscodec.

It sets fin to the given stream and hasMoreNAL = true."
Could you post your sample code showing how to add the SetStream() function to cscodec? A lot of people are waiting for your sample code...

@Julius: I know you will change the decoder in the future, but I need some results soon!

Thanks for your quick responses!
Coordinator
Nov 17, 2014 at 3:03 PM
The issue is that the FrameDecoder makes a new state every time it is created.

You either have to be able to write to the stream before giving it to the decoder, or fast enough that it never runs out of data.

Or make a method SetStream(Stream ins) { this.fin = ins; _hasmorenal = true; }

which allows it to resume without making a new state.
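A hedged sketch of that patch inside csCodec's FrameDecoder class (the field names follow the post above and may differ in your copy of the source):
public void SetStream(System.IO.Stream ins)
{
    this.fin = ins;          // replace the exhausted input stream
    this._hasmorenal = true; // let DecodeFrame resume without making a new state
}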
Nov 17, 2014 at 3:29 PM
Julius,

OK, it is working.... Thanks a lot for your great library. I am going to use your library for playing back cameras in more than 300 cars.
Coordinator
Nov 17, 2014 at 3:32 PM
Great!

Thanks for the info!

There are a lot of updates in the works so keep an eye out!

Let me know if you run into anything else.
Coordinator
Nov 17, 2014 at 3:41 PM
And keep in mind that cscodec is now part of cspspemu.

The latest code is at

https://github.com/soywiz/cspspemu/tree/master/Hle/CSPspEmu.Hle.Media

and has support for a bit more than what cscodec had alone.
Nov 17, 2014 at 4:47 PM
OK, great; the changes definitely worked, and I am no longer getting pixelated frames.
It seems that I might be missing a couple of frames though while playing back.
In your example, I am copying the stream to a file after you set the stream position to 0.

Should I be doing anything after Depacketize?
Thank you for the support!
Coordinator
Nov 17, 2014 at 4:50 PM
Edited Nov 17, 2014 at 4:53 PM
You might want to check frame.Complete before writing it; other than that, no.

Sometimes an early packet arrives and raises the frame-changed event early, but a subsequent event will indicate the frame is complete if the packet arrives later.

No problem; let me know if you need anything else.
Nov 17, 2014 at 5:09 PM
Edited Nov 17, 2014 at 5:25 PM
Hello guys,
I am sorry; I was actually out of the country and just came back.
juliusfriedman: I can assign some resources to help you with documentation. I am testing with more than 1000 cameras in the field in the USA and the UK. I saw your list and want to discuss it over email ("Khammadm@hotmail.com"); you may direct me....

eloade: Actually my code will not work for you, as I am facing this issue with some China-based H.264 NVRs. Those streams are not standard streams like an AXIS camera or other manufacturers provide, and that decoder does not work with standard H.264 streams like those from AXIS cameras; it works only with one specific China-based H.264 encoder (in total, two manufacturers provide non-standard RTSP streams)...
Please check the function below; it will create a .h264 file without the need for the SetStream() function.
public void AppendAllBytes(System.IO.Stream stream)
{
    //argument-checking here.
    lock (lockObject)
    {
        using (var fs = new System.IO.FileStream("E:\\abc\\file.h264", System.IO.FileMode.Append))
        {
            byte[] buffer = new byte[stream.Length];

            //  stream.Seek(0, SeekOrigin.Begin);
            stream.Read(buffer, 0, (int)stream.Length);
            fs.Write(buffer, 0, buffer.Length);
            fs.Flush();
            fs.Close();
        }
    }
}
I will try to write some test code and share it with you, but first I need to check all the updated code....
Coordinator
Nov 17, 2014 at 5:12 PM
Not a problem.

There is also an implementation of AVS at

https://avsreflector.codeplex.com/SourceControl/latest

AVS is not H.264, but it's very similar, without the motion compensation.

Let me know if you guys need anything else.
Nov 17, 2014 at 5:18 PM
Hi, please see the attached code; it seems to be better, but I'm getting a bit of pixelation at the beginning. Could you confirm that it is correct?
 void Client_RtpFrameChanged(object sender, Media.Rtp.RtpFrame frame)
        {

            if (!frame.Complete) return;


            var context = ((Media.Rtp.RtpClient)sender).GetContextByPayloadType(frame.PayloadTypeByte);
            Media.Rtsp.Server.Media.RFC6184Media.RFC6184Frame hframe = new Media.Rtsp.Server.Media.RFC6184Media.RFC6184Frame(frame);
            
            if (context == null || context.MediaDescription.MediaType != Media.Sdp.MediaType.video) return;
            if (bFirst == true)
            {
                Media.Sdp.SessionDescriptionLine fmtp = context.MediaDescription.FmtpLine;

                byte[] sps = null, pps = null;
                 
                foreach (string p in fmtp.Parts)
                {
                    string trim = p.Trim();
                    if (trim.StartsWith("sprop-parameter-sets=", StringComparison.InvariantCultureIgnoreCase))
                    {
                        string[] data = trim.Replace("sprop-parameter-sets=", string.Empty).Split(',');
                        sps = System.Convert.FromBase64String(data[0]);
                        pps = System.Convert.FromBase64String(data[1]);
                        break;
                    }
                }

                bool hasSps, hasPps, sei, slice, idr;
                hframe.Depacketize(out hasSps, out hasPps, out sei, out slice, out idr);

                byte[] result = hframe.Buffer.ToArray();

                using (var stream = new System.IO.MemoryStream(result.Length))
                {
                    if (!hasSps && sps != null)
                    {
                        stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);

                        stream.Write(sps, 0, sps.Length);
                    }

                    if (!hasPps && pps != null)
                    {
                        stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);

                        stream.Write(pps, 0, pps.Length);
                    }

                    hframe.Buffer.CopyTo(stream);

                    stream.Position = 0;

                    // Write All Bytes stream
                    decode_stream(stream);
                    bFirst = false;
                }
            }
            hframe.Depacketize();
            decode_stream(hframe.Buffer);

        }

private void Stop_Click(object sender, EventArgs e)
{
    client.StopPlaying();
}

public void decode_stream(Stream fin)
{
    using (var fs = new FileStream("C:\\temp\\test2.h264", FileMode.Append))
        fin.CopyTo(fs);
}
Coordinator
Nov 17, 2014 at 5:25 PM
Edited Nov 17, 2014 at 5:26 PM
What is result? The problem may be that you're making the stream have extra data or truncating it.

See the examples above.

Your only difference is that you don't have to play the stream.

If you do, then also see above, but remember that the library only takes data and makes it decoder-ready; beyond that, this class just passes the contained data to the buffer as-is.

I also have static properties for the start codes available in the Codecs project.
Nov 17, 2014 at 5:34 PM
@eloade, if you want to save the stream as .h264 then you will only need the function below.

public void AppendAllBytes(System.IO.Stream stream)
{
    //argument-checking here.
    lock (lockObject)
    {
        using (var fs = new System.IO.FileStream("E:\\abc\\file.h264", System.IO.FileMode.Append))
        {
            byte[] buffer = new byte[stream.Length];

            //  stream.Seek(0, SeekOrigin.Begin);
            stream.Read(buffer, 0, (int)stream.Length);
            fs.Write(buffer, 0, buffer.Length);
            fs.Flush();
            fs.Close();
        }
    }
}
Please check it and let us know about the result....
Coordinator
Nov 17, 2014 at 5:38 PM
Edited Nov 17, 2014 at 5:39 PM
Why create a buffer and then read, write and flush?

Flush and Close get called when disposing.

Use stream.CopyTo(fs);

That eliminates the buffer.

Thanks for the help though.

Additionally, the filesystem locks itself; you don't need a synchronized context unless you keep the stream open, and even then the FS will handle this.

And I thought I told him this in another thread :-D
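For comparison, the simplified helper described above would look like this (behavior is the same; Dispose flushes and closes the FileStream):
public static void AppendAllBytes(string path, System.IO.Stream stream)
{
    using (var fs = new System.IO.FileStream(path, System.IO.FileMode.Append))
        stream.CopyTo(fs); // no intermediate buffer needed
}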
Marked as answer by juliusfriedman on 11/17/2014 at 10:39 AM
Nov 17, 2014 at 5:42 PM
Edited Nov 17, 2014 at 5:43 PM
@julius &amp; @silentwarrior, thank you for the help, it is working now. I appreciate your support... and your patience!
Yes, I've been using stream.CopyTo as well.

Now onto getting audio to work :)
Nov 17, 2014 at 5:51 PM
Yes, you are right, but I actually have to lock this for multiple-camera access. Right now I am working on it for Leo....
Coordinator
Nov 17, 2014 at 5:53 PM
Edited Nov 17, 2014 at 9:23 PM
Yes, the audio will require NAudio unless the codec is PCM; then you can use SoundPlayer in System.Media without NAudio.

Unfortunately I think your codec is mpeg4-generic, but the good news is that I have already implemented the depacketization.

I would also like to mention that System.Windows.Media has a media player which does support MP3 among other formats; I am not sure about MPEG-4 (ES).

Once the audio decoder for the format is implemented, it will be totally possible to pass the decoded samples to a SoundPlayer and get audio output support (see the sketch below).

Getting input audio is another challenge again and will require hardware support; the library will accept bytes and streams to accomplish this in a platform-independent way, so however YOU can get PCM audio samples, you will then be able to encode, decode, or play back.

Let me know if I can help.
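A minimal sketch of that playback path, assuming 16-bit mono PCM at 16 kHz as in the L16/16000 track of the SDP above: SoundPlayer only accepts WAV input, so the raw samples are wrapped in a RIFF header first.
static void PlayPcm16(byte[] pcm, int sampleRate = 16000, short channels = 1)
{
    const short bits = 16;
    int byteRate = sampleRate * channels * bits / 8;

    // L16 RTP audio is big-endian; WAV expects little-endian samples.
    for (int i = 0; i + 1 < pcm.Length; i += 2)
    {
        byte t = pcm[i]; pcm[i] = pcm[i + 1]; pcm[i + 1] = t;
    }

    var ms = new System.IO.MemoryStream();
    var w = new System.IO.BinaryWriter(ms);
    w.Write(System.Text.Encoding.ASCII.GetBytes("RIFF"));
    w.Write(36 + pcm.Length);
    w.Write(System.Text.Encoding.ASCII.GetBytes("WAVEfmt "));
    w.Write(16);                           // fmt chunk size
    w.Write((short)1);                     // PCM format tag
    w.Write(channels);
    w.Write(sampleRate);
    w.Write(byteRate);
    w.Write((short)(channels * bits / 8)); // block align
    w.Write(bits);
    w.Write(System.Text.Encoding.ASCII.GetBytes("data"));
    w.Write(pcm.Length);
    w.Write(pcm);

    ms.Position = 0;
    new System.Media.SoundPlayer(ms).PlaySync();
}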
Marked as answer by juliusfriedman on 11/17/2014 at 11:27 AM
Coordinator
Nov 17, 2014 at 5:55 PM
@silent: each camera would have its own file, so I think you would be okay.

Let me know if I can be of any assistance.
Nov 17, 2014 at 9:18 PM
The audio codec is PCM.
Question: to get the audio when I StartPlaying the RTSP client, do I pass in Media.Sdp.MediaType.audio? Or when I pass .video, would it also contain the audio?
Coordinator
Nov 17, 2014 at 9:22 PM
It only sets up the type given, so don't pass anything and it will set them all up.

E.g. StartPlaying()

If you run into any issues let me know; otherwise, once you have it working, possibly post it up so I can use it as an example.
Marked as answer by juliusfriedman on 11/17/2014 at 2:22 PM
Nov 18, 2014 at 3:09 AM
Edited Nov 18, 2014 at 3:11 AM
Hello,
Does anyone know how to push the depacketized stream to VLC for online playback?

@julius: could you give me an example of how to send PAUSE and TEARDOWN from the client to the server?
Nov 18, 2014 at 4:29 AM
@leo, you may use nVLC; it's a VLC wrapper which you can use in C#. I am using it and have found it very accurate.
Please note: use the latest version, named nVLC.XXXXX.
Besides this, you can also use FFmpeg.

Let me know if you need anything else.
Nov 18, 2014 at 5:19 AM
Silent Warrior, I am using the latest VLC version but don't know how to push the depacketized stream (frame by frame). I expect it to run in real time, just like when we paste the URL into the stock VLC software. Have you tried to do something like that? Do you know of any thread or sample code to resolve this problem?
Nov 18, 2014 at 6:37 AM
VLC can run with memory input. Although you can run it in a C# application in the background with a URL, you may also run it with a single frame.
If you want to use VLC, I suggest you grab the frame from VLC using the URL directly in the media factory; it seems you want to grab the frame using this library and then pass it to VLC, while I think VLC can grab the frame in System.Drawing.Bitmap format. Let me know if you need that code.
For memory input in VLC please check the link below.
http://www.codeproject.com/Articles/109639/nVLC
Nov 18, 2014 at 7:53 AM
Edited Nov 18, 2014 at 7:58 AM
Silent, you are right; I want to grab frames using the Media library and push them to VLC for real-time playback.

Please provide me your sample code! Did you do something like this before? Did you compare the image quality results from cscodec and VLC? In my results, cscodec images are really bad...

I am writing an ASP.NET application with C#, so the VLC web plugin is in use. Can the VLC web plugin be used in this case?
Coordinator
Nov 18, 2014 at 11:53 AM
The depacketization is only for decoding.

If you want to stream it back, there is the RFC6184Media class.
Nov 18, 2014 at 1:11 PM
I have tried to run the Big Buck Bunny stream with VLC; the image quality is much better than using the Media library combined with cscodec. I think the image quality problem comes from cscodec. That is why I am trying to combine your library with VLC.

As a first step, I just want to show the live stream frames, just like running RTSP in the VLC software, with the ability to record these frames when I want.
Coordinator
Nov 18, 2014 at 1:19 PM
Edited Nov 18, 2014 at 4:57 PM
@Leo, a pause can be created and sent, but there is no predetermined method for it yet.

Also, this library can't affect the quality for H.264 etc. because it doesn't decode. (Yet.)

The other thing is that this library handles STAP-B, FU-B and MTAPs but VLC doesn't.

So your best bet could be to combine this library and another decoder besides cscodec until I have a decoder.

If you want to make a stream from an H.264 file, then you will need a class to read the NALs and give them to the packetize function; a sketch of such a reader follows below.

If you need to do this from a camera, it's already done, as in the RFC6184Media class.
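A hedged sketch of such a reader: scan an Annex B .h264 file for start codes and yield each NAL unit so it can be handed to the packetize function (whose exact signature is not shown here).
static System.Collections.Generic.IEnumerable<byte[]> ReadNalUnits(string path)
{
    byte[] data = System.IO.File.ReadAllBytes(path);
    int prev = -1;
    for (int i = 0; i + 2 < data.Length; i++)
    {
        if (data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1)
        {
            if (prev >= 0) yield return Slice(data, prev, i);
            prev = i + 3; // payload begins after the 00 00 01 start code
            i += 2;
        }
    }
    if (prev >= 0) yield return Slice(data, prev, data.Length);
}

static byte[] Slice(byte[] src, int from, int to)
{
    // trim a trailing zero that belongs to the next 4-byte start code
    while (to > from && src[to - 1] == 0) to--;
    byte[] nal = new byte[to - from];
    System.Array.Copy(src, from, nal, 0, nal.Length);
    return nal;
}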
Marked as answer by juliusfriedman on 11/18/2014 at 6:19 AM
Nov 18, 2014 at 1:44 PM
Edited Nov 18, 2014 at 1:46 PM
@Julius,
  1. Could you give me a sample or point out how to create PAUSE and TEARDOWN in test.cs in your library?
  2. I have tried my software with the Big Buck Bunny stream; it works, but with poor image quality. With the cameras (they connect to the server through an MVDR device) the final result is not right... Does the problem come from cscodec? See the SDP information:
v=0
o=- 1416320924023397 1 IN IP4 60.251.157.47
s=Session streamed by DVR Stream
i=
t=0 0
a=tool:LIVE555 Streaming Media v2013.10.09
a=type:broadcast
a=control:*
a=source-filter: incl IN IP4 * 60.251.157.47
a=rtcp-unicast: reflection
a=range:npt=0-
a=x-qt-text-nam:Session streamed by DVR Stream
a=x-qt-text-inf:
m=video 0 RTP/AVP 96
c=IN IP4 0.0.0.0
b=AS:8192
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1;profile-level-id=420028;sprop-parameter-sets=Z0IAKOkCw/I=,aM4xUg==
a=control:track1
m=audio 0 RTP/AVP 97
c=IN IP4 0.0.0.0
b=AS:256000
a=rtpmap:97 L16/16000
a=control:track2
Nov 18, 2014 at 2:06 PM
@leo, did you try FFmpeg?
Nov 18, 2014 at 4:02 PM
I have not... Are you trying FFmpeg for decoding? Actually, my stream is encoded by ffmpeg (Live555).
Coordinator
Nov 18, 2014 at 4:16 PM
110136 supports sending a Pause easily. If you need anything else let me know. There is already a SendTeardown method.

You can use those decoders by taking the result and, instead of giving it to cscodec, writing it to a file, or giving the stream to the library by writing some interop.

You can also use DirectShow / Media Foundation or Silverlight...

I am going to be working on the decoder soon, but I have a bit on my plate at the moment; keep an eye out, and if I can be of assistance in the meantime let me know!
Marked as answer by juliusfriedman on 11/18/2014 at 9:56 AM
Nov 18, 2014 at 4:51 PM
Hey Silent, I think DirectShow is a promising decoder for the moment... Shall we cooperate to resolve the cscodec problem?
Coordinator
Nov 18, 2014 at 4:55 PM
Edited Nov 18, 2014 at 4:56 PM
DirectShow is deprecated; actually, Media Foundation would be the best API, but it only works on Windows.

Why don't you guys help with this project?

What would you need to get started? If you need direction I can probably provide that as well.
Nov 18, 2014 at 5:13 PM
I just need online playback of the video stream from the server. Your library can receive and depacketize the stream, but right now it still needs a decoder for my purposes. I need to show some preliminary results to my boss so the project can continue, and I have just around two weeks to go, so I can't wait until you finish your decoder.

What is the best way for me?
Coordinator
Nov 18, 2014 at 5:17 PM
I'm lost; online streaming is already supported.

If you want to take an H.264 file and then stream it out, that's easy also.

Just look at how rtpdump is handled: derive a class for reading the NALs (like the sketch earlier in this thread) and then packetize them with my library.
Nov 18, 2014 at 5:21 PM
@juliusfriedman, Boss, I already explained that I have resources (technical writers, graphic designers, developers) and will be happy to help you with this project; I just need your green signal, Boss.

@leo, I still didn't check it with DirectShow; actually, I am trying EmguCV and OpenCV right now. Look, if we spend time implementing DirectShow or FFmpeg or VLC, it may work for you but will not be fruitful for this library; what if we start working on the decoder.....
Nov 18, 2014 at 5:23 PM
Edited Nov 18, 2014 at 5:26 PM
As a first step I just need to show the stream video with acceptable quality (I mean show the video in a PictureBox frame by frame with good quality), but cscodec seems not good enough for good video quality.
Coordinator
Nov 18, 2014 at 5:29 PM
Edited Nov 18, 2014 at 5:33 PM
I think that's an excellent idea.

Why don't you go ahead and do that? There is jcodec, which is in Java and can be useful, but I would look more at avsreflector because that's closer to the API I would like to achieve.

NSynth also has the H.264 standard and API, but no work done.

How about I finish the container stuff like I planned, and when you have progress I will start to help by working on a common image/picture class which is efficient and extensible for display and conversion.

If you already have a picture class, it will probably just reduce the work done in that class, and it should use SIMD if possible.

I will have to provide something similar for audio anyway, but it will be far simpler to achieve.

Both audio and video should then be encompassed by some type of sample interface.
Nov 18, 2014 at 5:33 PM
Edited Nov 18, 2014 at 5:42 PM
@silent, yep, I think that is the best way for us, but time is limited. How long would we spend on an H.264 decoder?

Are you trying ffmpeg in EmguCV?
Coordinator
Nov 18, 2014 at 5:35 PM
For a full implementation, at least 30 days.

You could also just use Silverlight.
Nov 18, 2014 at 5:43 PM
@leo, did you check EmguCV? It can play any RTSP stream; the only bug is that it disconnects after X time, for which you can use this library to keep the stream alive. Just check the frame quality and let me know if it solves your issue. I can provide complete test code if you need it.
Nov 18, 2014 at 5:48 PM
I have checked EmguCV before; it had the problem just like you said. OK, please send me your sample code so we can double-check.
Nov 19, 2014 at 5:34 PM
Sorry for the late reply; I am busy with another project. Check the code below for EmguCV; please download version 3.x.xxx. I didn't upload a test project because the DLLs were very heavy..
Emgu.CV.Capture capture;

CvInvoke.UseOpenCL = false;
capture = new Emgu.CV.Capture(@"RTSP Stream URL here");
//capture.SetCaptureProperty(Emgu.CV.CvEnum.CapProp.FrameHeight, 240);
//capture.SetCaptureProperty(Emgu.CV.CvEnum.CapProp.FrameWidth, 320);

capture.ImageGrabbed += new EventHandler(capture_ImageGrabbed);

capture.Start();

void capture_ImageGrabbed(object sender, EventArgs e)
{
    try
    {
        Mat frame = new Mat();
        capture.Retrieve(frame, 0);
        this.BackgroundImage = frame.Bitmap;
    }
    catch (Exception ex)
    {
        //Exception
    }
}
Now, to grab completely decoded frames using VLC, please check the code below.
using Declarations;
using Implementation;
using LibVlcWrapper;
using Declarations.Media;
using Declarations.Players;

IVideoPlayer player;
IMediaPlayerFactory factory = new MediaPlayerFactory(false);
player = factory.CreatePlayer<IVideoPlayer>();
IMedia media = factory.CreateMedia<IMedia>("RTSP Stream URL Here");

player.CustomRenderer.SetFormat(new BitmapFormat(_frameWidth, _frameHeight, ChromaType.RV24));
player.CustomRenderer.SetCallback(FrameCallback);

player.Open(media);
GC.KeepAlive(factory);
GC.KeepAlive(player);
GC.KeepAlive(media);
player.Mute = true;

media.Parse(true);

player.Play();

private void FrameCallback(Bitmap frame)
{
    // Do anything with the frame, but clone it first since it is passed
    // by reference, and it should be disposed at the end.
    frame.Dispose();
}
For the above code you need nVLC, which you may Download Here.
You will also need to download the latest version of VLC and copy libvlccore.dll, libvlc.dll and the plugins folder.

Let me know if you need anything else.
Nov 19, 2014 at 5:46 PM
@juliusfriedman, do you know of any H.264 decoder/demuxer which I can use instead of cscodec? I think cscodec is not suitable for decoding frames with this library.
Nov 19, 2014 at 6:27 PM
Edited Nov 20, 2014 at 5:42 AM
@julius, what did you mean about using Silverlight? What would the improvements be for my issue if I use Silverlight?

@silent, you could post only the project without the DLL files (they can be downloaded from the internet), or upload it to a server. I understand your idea but do not really understand the logic of the construction in your code, so please upload the full project.

The latest EmguCV version is Emgu.CV-2.4.9 Beta; where could you download ver 3.x.xxx?
Coordinator
Nov 19, 2014 at 8:02 PM
@Silent and Leo,
cscodec does work, but you will need to perform some work on the code base to "improve quality"; the quality loss is a result of state loss in the implementation, and other bugs include SPS and PPS allocation, also due to state loss.

I am not going to support cscodec any more than I have because it's a lost cause; I will be building my own decoder soon.

Silverlight has its own codec pipeline, so you can just use a MediaElement with a custom renderer to get the samples from the .h264 file.

You can also use Media Foundation, as I have stated.

There is no other OPEN and out-of-the-box solution for decoding any type of video with .NET; this is the purpose of me building this library :)

Right now I am finishing up work on the container formats, which will contain all of the various logic needed to get samples and will allow playback of files from the RtspServer; this will also pave the way for decoding, since the logic will be needed there also.

I will also need to support writing back to a file etc.

If you guys need anything else let me know!
Nov 20, 2014 at 11:57 AM
Hey Julius,

The TEARDOWN function seems not to be working for my application... I am using C# ASP.NET (see the code below):
protected void stop_btn_Click(object sender, EventArgs args)
{
    if (ClientThreadProc != null)
    {
        Client.SendTeardown();

        Client.Dispose();

        Client = null;

        Media.Utility.Abort(ref ClientThreadProc);
    }
}
Coordinator
Nov 20, 2014 at 12:05 PM
What's not working about it?

Also, Dispose will tear down the stream.
Nov 20, 2014 at 12:58 PM
Yep, that is what I want, but it does not happen... The stream and the FrameChanged event keep running like I have not called the stop_btn_Click() function.

I have checked with a C# Windows Forms or console app and it works, but in a web form (my app is ASP.NET WebForms) it doesn't...
Coordinator
Nov 20, 2014 at 1:03 PM
I can't really support your application.

All I can say is to be sure that the reference is the same.

Also, the basemedia project shows an example of H.264 and MPEG-4 decoding with Silverlight.

You can use that exactly as-is, only taking the RTP frame rather than a TS unit.

I am going to finish the MPEG containers and whatnot and then hopefully move on to the demuxer.
Marked as answer by juliusfriedman on 11/20/2014 at 6:04 AM
Nov 20, 2014 at 2:19 PM
@juliusfriedman, it's really good news that you will move onto the demuxer soon. I hope you will start with decoding H.264 first.
@leoteo, can you please explain what you actually want to do? All the questions you asked were answered. For ASP.NET you will need to attach the process to the IIS application to look after the continuous stream. Actually, I am still not clear about your requirements; can you please provide the source URL for testing?
Nov 20, 2014 at 2:40 PM
Edited Nov 20, 2014 at 2:47 PM
@silent, my question was: when I send a TEARDOWN request to the server, it stops with a C# console or Windows Forms app, but it doesn't with ASP.NET. I think the reason is I forgot to stop the IIS server. Thanks for your suggestion!


leoteo wrote:
@silent, you could post only the project without the DLL files (they can be downloaded from the internet), or upload it to a server. I understand your idea but do not really understand the logic of the construction in your code, so please upload the full project.

The latest EmguCV version is Emgu.CV-2.4.9 Beta; where could you download ver 3.x.xxx?
Could you send me your EmguCV test project?
Nov 20, 2014 at 2:44 PM
You are welcome, Leo.
@juliusfriedman, Boss, can you please point us to any tutorial, blog, link or other help about Silverlight integration?
Coordinator
Nov 20, 2014 at 2:51 PM
Something like this https://basemedia.codeplex.com/SourceControl/latest#BMFF/SLTest/TestStreamSource.cs

Or any other sample which uses H264 Media would be compatible e.g. https://mediastreamsources.codeplex.com/

If you need anything else let me know!
Nov 20, 2014 at 4:31 PM
@leo, EmguCV ver 3.x.xx is on sourceforge.net; I think you missed the alpha release. Please Download Here.

I hope the above will solve your issue; if you face any problems, please let me know.
Nov 21, 2014 at 6:21 PM
Hey Silent, as a first step I put your sample code into a C# Windows Forms app, and it is running very well. I have tried to keep live video going for around 20 minutes and it works without any problem. I plan to use the Media library to keep the stream alive; we can have better control than using VLC.

But when I used EmguCV in ASP.NET, my program failed with the error "the opencv_core300.dll is not found". I googled this error but the problem still remains. Have you seen this problem? Do you know how to resolve it?
Coordinator
Nov 21, 2014 at 6:22 PM
Not to burst any bubbles, but EmguCV uses libav, doesn't it? Isn't that just like using FFmpeg?
Nov 21, 2014 at 7:11 PM
Edited Nov 21, 2014 at 7:14 PM
Yep, you are right. Using FFmpeg is just a temporary solution until you complete the H.264 decoder. Julius, have you tried EmguCV with ASP.NET?
Nov 23, 2014 at 12:36 PM
@leo, it seems that you are missing some dependency of opencv_core300.dll; please have a look Here.
By the way, why are you using EmguCV in ASP.NET? According to the big boss you may use Silverlight. For EmguCV you could use a service which takes care of all the H.264 RTSP streams, with your web application talking to that service....

@juliusfriedman, we are desperately waiting for your decoder.....
Nov 23, 2014 at 1:39 PM
@Silent, I just want to try EmguCV in ASP.NET for future purposes such as human tracking, etc. I am trying to use SharpFFmpeg (http://sharpffmpeg.sourceforge.net/) for H.264 decoding; have you tried this library?
Coordinator
Nov 23, 2014 at 3:43 PM
I know, I am also eager to start.

I have just been very busy ensuring the container formats are complete before moving along.

Please keep in mind that I'm only one person :-D

There are already over a dozen container formats implemented here to the spec, and the decoders are at least twice as complex.

I already prematurely included the changes for H.264 to get you guys going, and now I'm a little behind.

I need to focus on finishing the container release, which involves some engineering to properly split the frame classes and media classes while still keeping them used by the RtspServer.

Then the unfinished containers etc.
Marked as answer by juliusfriedman on 11/23/2014 at 8:43 AM
Nov 24, 2014 at 12:02 PM
Hey guys, I found out that the broken portion in the final image is audio stream data, so if we can cut it off we can get the right image... Do you have any idea about this problem?
Coordinator
Nov 24, 2014 at 4:03 PM
How did audio data get there?
Nov 24, 2014 at 4:42 PM
I compared the decoded images from cscodec and EmguCV (ffmpeg) and saw that the top part of the image is the same in both; the difference is only in the bottom part from cscodec. The broken portion is always at the bottom... So I think there must be some reason behind it...
Coordinator
Nov 24, 2014 at 4:52 PM
Can you show the difference?

Also, cscodec is not my software so I can't support it.

If you take the result of just the depacketization and use that, it should play fine in VLC.

This doesn't sound like an issue with my library...
Marked as answer by juliusfriedman on 11/24/2014 at 9:53 AM
Nov 24, 2014 at 4:53 PM
@leo can you share your code? I think I can fix it; I faced the same issue before...
Coordinator
Nov 24, 2014 at 4:55 PM
What would need to be fixed given the existing example?
Nov 24, 2014 at 5:07 PM
leoteo wrote:
@Silient, I just want to try EmguCV in ASP.net for future purposes such as human tracking, ..etc. I am trying to use SharpFFMPEG (http://sharpffmpeg.sourceforge.net/) for H264 decoding, Have you try this library???
leo, first of all, for human tracking you should not embed your logic in an ASP.NET web page. How about deploying your human-tracking logic on a server that exposes a web service? Whenever you need to track a human, you just call that web service method, which takes an image as a parameter and returns the result as a point if a human is detected. I think human tracking takes 300 to 700 milliseconds per image (if you use OpenCV or Emgu CV), depending entirely on the resolution of the image, and cameras always run at 30 FPS, so I think you should have on-demand logic. I have already done this type of project and have spent 15 years in AI; this is what I think will be better for you...
Nov 24, 2014 at 5:15 PM
@juliusfriedman Boss, the issue is not in your library. Actually leo is using FFMPEG in either case, whether Emgu CV or SharpFFMPEG, and this is the same bug I saw when I was decoding the stream with FFMPEG.
Nov 25, 2014 at 1:16 AM
@Julius. Silient is right, I did not mean that the bug comes from your library.

@Silient warrior, thanks for your suggestions!! I have tried your code with Emgu CV and the image result is good quality, so I think FFMPEG can decode our depacketized frames. Therefore, I want to temporarily use Emgu CV for playback of the stream video, but Emgu CV needs more than 700 MB for its libraries -> this is impossible for my ASP.NET application.

Did you try to decode with the FFMPEG library? Do you see the same bug? See my SharpFFMPEG decoding function:
int success;
protected void decoder_sharpffmpeg(byte[] buf)
{
    // NOTE: these init/alloc/open calls should really be done once, not per frame.
    FFmpeg.avcodec_init();
    FFmpeg.av_register_all();
    IntPtr codec = FFmpeg.avcodec_find_decoder(FFmpeg.CodecID.CODEC_ID_H264);
    IntPtr codecCont = FFmpeg.avcodec_alloc_context(); //AVCodecContext
    FFmpeg.avcodec_open(codecCont, codec);
    IntPtr decoded_frame = FFmpeg.avcodec_alloc_frame();

    // Copy the managed buffer into unmanaged memory; Marshal.Copy is both correct
    // and much faster than the original StructureToPtr loop, and the input padding
    // required by ffmpeg must be zeroed.
    IntPtr buffer = Marshal.AllocHGlobal(buf.Length + FFmpeg.FF_INPUT_BUFFER_PADDING_SIZE);
    Marshal.Copy(buf, 0, buffer, buf.Length);
    for (int i = 0; i < FFmpeg.FF_INPUT_BUFFER_PADDING_SIZE; i++)
        Marshal.WriteByte(buffer, buf.Length + i, 0);

    // The size argument is the actual data length; the padding is not part of the data.
    FFmpeg.avcodec_decode_video(codecCont, decoded_frame, ref success, buffer, buf.Length);

    Marshal.FreeHGlobal(buffer); // don't leak the unmanaged buffer

    //int iStartAddress1 = frame.ToInt32();
    //decoded_frame.
}
Nov 25, 2014 at 11:02 AM
@leo thank you, I will check it tonight and let you know. In the meanwhile you may check This Blog, which I think may be helpful to you.
Nov 26, 2014 at 11:09 AM
Edited Nov 26, 2014 at 11:10 AM
@Julius, in case I want to receive many streams from the server together (for a simple case: 2 streams), I just need to declare 2 "Client" variables and do exactly the same things as for one stream, right? Does that affect the frame rate of the streams?

@Silient, you are welcome!
Coordinator
Nov 26, 2014 at 9:56 PM
Edited Dec 4, 2014 at 11:01 PM
@ Leo, yes that is sufficient, and no, the rate of the streams is not affected by this at all.
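
For example, a minimal sketch using the same RtspClient API shown earlier in this thread (the URLs are placeholders):

    // Minimal sketch: one RtspClient per stream; each has its own handler and state.
    var client1 = new Media.Rtsp.RtspClient("rtsp://camera1/stream");
    var client2 = new Media.Rtsp.RtspClient("rtsp://camera2/stream");

    client1.StartPlaying();
    client2.StartPlaying();

    client1.Client.RtpFrameChanged += (s, f) => { /* handle frames from camera 1 */ };
    client2.Client.RtpFrameChanged += (s, f) => { /* handle frames from camera 2 */ };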
Coordinator
Dec 4, 2014 at 11:02 PM
If you guys could try out the latest version of the code and verify that performance is better, memory consumption is slightly reduced, and there are no anomalies / bugs, I would appreciate it!

Thanks!
Dec 5, 2014 at 1:28 AM
@ Julius, OK I will try your latest version. Since you said you need one month for the H264 decoder, we are still waiting for you ;). Do you have any news about the H264 decoder?
Coordinator
Dec 5, 2014 at 3:11 AM
Not much; I am working on tweaking some bugs in the RtpClient and RtpTools which should have been dealt with what seems like ages ago.

Then I still have to complete the container formats.

There is honestly not too much more to go, but it has to be done; the testing will ensure that I'm not missing anything and don't have to backtrack when the time comes.

Thanks for the help!
Dec 5, 2014 at 4:28 AM
Okay boss, I will put my 500 cameras on this library again and let you know. Leoteo is right, we are really waiting for the H264 decoder. Once you are done with that I will recode the library for memory management and share my code with you.
Coordinator
Dec 5, 2014 at 4:48 AM
Silent, recode what library for memory management?

The RtspClient / RtpClient and RtspServer should be next to exemplary in terms of performance and memory management.

That is part of what is taking so long; the other part is working out what bugs exist in the standard / my software and correcting them going forward.

Rtcp was one major place this really needed attention, e.g. with the RtcpHeader.

Another place was the RtpDump reader and writer / RtpToolEntry, which I am working on releasing tonight.

After that, all I should have to do is finish up the containers and performance test, and then I can move on to Demuxing, which will be playback of files through the RtspServer with RR, FF and Pause support.

Then, after that, writing to containers for the Muxer; this will allow you to take a stream and write it to a file, e.g. MP4, AVI, ASF etc., among other things such as transcoding containers.

Then I can finally start my work on the Decoder.

I will need to continually have performance testing and bug checks during this time to ensure that the library stays solid.

I will await your feedback of the latest release.
Marked as answer by juliusfriedman on 12/4/2014 at 9:48 PM
Dec 5, 2014 at 5:08 AM
Edited Dec 5, 2014 at 5:09 AM
@ Julius, according to your schedule the H264 decoder may be complete after several months?

@Silientwarrior, how did your try with the SharpFFMPEG decoder go?
Coordinator
Dec 5, 2014 at 6:13 AM
Edited Dec 5, 2014 at 10:55 PM
I am not sure where that timeframe comes from; the whole library has been out for just over several months.

Demuxing support is virtually in place; as you can see, the sampleCount is accurately returned for all tracks in all containers except Real and a few others. The logic just has to be created to actually get each sample and use the proper profile to Packetize it.

That shouldn't take much more than a month itself to ensure it is stable (including fragmented support etc.) for all containers.

Muxer will then use the Depacketize of the profiles and the appropriate writer to allow writing.

That, again, shouldn't take much more than a month itself to ensure it is stable (including fragmented support etc.) for all containers.

So there we have two months before the Decoder can be started, worst case scenario; please don't forget the library is free and the holidays are approaching.

The library already gives you a way to serve streams efficiently, which is more than I can say for FFMPEG / LibAV, Darwin or Wowza, which use a lot more memory and CPU and can barely handle the load tests I throw at them.

I am doing my best, but if people would actually contribute rather than waiting for everyone else to do the work, things would progress much more quickly.

I already told you Silverlight can decode, and I linked you to an example where all you had to do was change the TsPacket to an RtpFrame and it would have worked in Silverlight. There is also a JavaScript H264 decoder here:

https://github.com/mbebenita/broadway

That can be used on the client side to let your clients decode each frame, just give them the bytes.

There are a bunch of other ways to decode the frames but what will that get you?

What else do you need?

I am sorry Leo, but I am really confused, because even if you get the decoded frames, what are you going to do next? E.g. what is your goal?

Also does your camera not have a JPEG stream you can consume, either over RTP or HTTP?

If you can use that, doesn't that solve your problem of needing a decoder?

If you plan on doing motion detection on MPEG-compressed video, you are going to realize very quickly that if you had access to the decoder code itself you could do a much quicker DSP algorithm by looking at the bitstream than you can if you just decode and compare blocks for motion.

E.g. you can take the motion vector and calculate whether the ROI is intersected by the motion.

This saves you decoding time, comparing time and reading time.

You can do object detection within the bitstream up to 90% faster than the other ways, and it will be your only real solution for REAL TIME analytics (as the video has already been delayed by the encoding by that time).

If you are comparing each pixel for motion, then you need an intra frame anyway, as the B or P frames will just reference the I frame and won't be of any use, unless you are doing a general motion detection algorithm which just finds the position of motion and then processes further to locate blobs around that position.

Hopefully you understand that.
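
To make that idea concrete, here is a hypothetical sketch (the MotionVector type and the per-macroblock values are illustrative assumptions, not part of this library; only the geometry of the check is shown):

    using System.Collections.Generic;
    using System.Drawing;

    // Hypothetical sketch: decide whether any macroblock's motion vector moves its
    // block into a region of interest (ROI), without decoding any pixels.
    struct MotionVector { public int BlockX, BlockY, Dx, Dy; } // assumed to be in pixels

    static bool MotionIntersectsRoi(IEnumerable<MotionVector> vectors, Rectangle roi, int blockSize)
    {
        foreach (var mv in vectors)
        {
            // Where the block lands after applying its motion vector.
            var moved = new Rectangle(mv.BlockX + mv.Dx, mv.BlockY + mv.Dy, blockSize, blockSize);
            if (roi.IntersectsWith(moved)) return true; // motion entered the ROI
        }
        return false;
    }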

Maybe it's best to ask these questions in the Discussion forum and move the talk there rather than the Issues page, as this is really for PROBLEMS with the library, performance or otherwise.

In short, video compression is for compression, but doing something like motion detection / object detection has nothing to do with video compression for the most part, unless you're trying to take advantage of the encoding in some way to perform the algorithm (which limits where you can apply the algorithm but increases speed in most cases).

You need to determine what your goal is and what you're trying to do, and from there you can possibly even eliminate the need for a decoder, as it sounds like some of the routines you need are from the Encoder.

Last but not least, if you're working on nVidia hardware you can even use the GPU to decode H.264 and some other formats.

http://www.codeproject.com/Articles/421869/H-CUDA-Encoder-DirectShow-Filter-in-Csharp

I know that AMD has a similar API.

http://developer.amd.com/tools-and-sdks/media-sdk/

Based on that information you should now have more than enough ways to decode, but it seems like even after you do that you have a lot of work ahead of you, because you still need to analyze the images for your DSP...

That is where you are still going to need AForge or another library yet again...

If you're talking about adding support for Motion Detection / Blob Detection then I would agree; it will probably be at least several months before you can expect to see anything related to that, because of the amount of prior work which needs to be completed before I can even start to think about DSP in the library.

-Jay
Marked as answer by juliusfriedman on 12/4/2014 at 11:13 PM
Dec 7, 2014 at 4:56 PM
Dear Boss, I put a load of 800 cameras on it and the library looks very smooth. The only issue I face is that as the number of cameras increases the FPS goes down, maybe because of internet speed or something, but of course that's not a bug in this library. Only one thing has me stuck, and I am trying to track it down: when my solution/software connects to a camera outside the local network, it downloads correctly, but when it connects to a camera on the same network the stream stops after 10 minutes, exactly after 10 min. Don't know why. Anyway, till now it's working perfectly fine :)
Coordinator
Dec 7, 2014 at 7:47 PM
Sounds Great, thanks for the info!

110403 contains some fixes for the jitter calculation; with what version did the disconnections from streams on the local network start?
Coordinator
Dec 8, 2014 at 8:25 PM
@ Silent, do you have any HikVision cameras you could test against?

Also, in your latest tests did you find any usage issues?

110445 fixes a bunch of bugs, so make sure to grab the latest code!
Dec 9, 2014 at 4:05 AM
Dear Boss,
Actually I tested with the old version from last Friday; I will test again on the latest code.
About HIKVision cameras: yes, I have tons of cameras online. Please check the manufacturers below and let me know if you need any camera; I can provide administration credentials/session...

Axis
Pixord
Zavio
Apexis
Dericam
FOSCAM
ISeries pnP
Planet
HIKVision
Sony
Grandstream
Vivotek
ACTi
HiStream

please note that these are all IP cameras...

Thanks
Marked as answer by juliusfriedman on 12/31/2014 at 2:44 PM
Coordinator
Dec 10, 2014 at 3:51 AM
Edited Dec 12, 2014 at 11:14 PM
Awesome, thank you for such vital feedback.

I am glad everything works.

How was CPU usage?

How was memory usage?

Can you test both udp and tcp protocols?

Can you compare to FFMPEG / VLC or anything else for me?

Bandwidth is nothing I can get you much more of, but you may have better luck changing the default RTCP report interval to something like 7 seconds or longer; anything much longer than that and the SSRC may time out with the sender. (Which may help with CPU and bandwidth, but only slightly.)

Thanks again!
Coordinator
Dec 12, 2014 at 11:13 PM
Hey guys, Any updates on the questions I asked above?

Hope all else is well.
Dec 14, 2014 at 4:31 AM
Edited Dec 14, 2014 at 5:09 AM
Hey julius,

The CPU and memory usage are better than before. I am going to try your library with multiple streams together and let you know my results.
Marked as answer by juliusfriedman on 12/15/2014 at 10:20 AM
Coordinator
Dec 14, 2014 at 1:03 PM
Thanks Leo!
Dec 15, 2014 at 12:39 PM
Hey Julius, I could not add your library in Silverlight. Have you tried this?
Coordinator
Dec 15, 2014 at 2:21 PM
Everything should work in Silverlight except the RtspServer, which wouldn't be running under Silverlight anyway.


The reason the RtspServer can't run is simply because of the way the graphics classes have been isolated.

I will be splitting the media classes up further soon and ensuring that bitmap methods are only used in isolated assemblies and not in the main ones (as is the case now, except for the RtspServer).
Besides graphics / drawing, I don't think there are any other restrictions or compatibility issues.

Why do you need to run the server under Silverlight? Can't you just use the RtpClient under Silverlight and use the RTSP from the full framework?
Dec 15, 2014 at 5:17 PM
Edited Dec 15, 2014 at 5:25 PM
So you mean that I can add all of the dll files except the Media.Server dll in Silverlight, right? You are right, I'll just use Silverlight as a temporary decoder per your suggestion. But without the Media.Server dll could I still get the raw frames? Because I saw that Media.Server is used for RFC6184Frame.
Coordinator
Dec 15, 2014 at 5:20 PM
Correct.

You can just access the RtpClient or RtpPackets in Silverlight to decode.

The server isn't really required at that layer anyway.
Marked as answer by juliusfriedman on 12/15/2014 at 10:20 AM
Coordinator
Dec 19, 2014 at 10:28 PM
Hello guys,

The next release fixes a few more bugs and is more stable and better performing.

I have also updated the H264 classes to allow easier detection of the NALs included in the frame.

After Christmas I will probably update the API again to ensure that the Frame classes can be used from Silverlight by separating them from the MediaType / Stream classes.

Until then you can just replicate the logic to test the decoder there!

Give it a try and let me know what you think.
Marked as answer by juliusfriedman on 12/19/2014 at 3:28 PM
Coordinator
Dec 20, 2014 at 6:50 PM
Edited Dec 22, 2014 at 3:27 AM
110653 has been released and it should be the last release for a few days.

Give it a try if you get a chance and let me know if you find anything!
Marked as answer by juliusfriedman on 12/20/2014 at 11:50 AM
Dec 26, 2014 at 10:22 AM
Hello Julius,

I'm having the same issue as the original poster.
I'd like to retrieve some Bitmap objects from a H264 stream.

I copied the code from above and adapted it a bit. There were some changes in
the API since then and the original example code didn't compile anymore.
However, I still receive an EndOfStreamException during the call of frameDecoder.DecodeFrame().
I understand that you don't support cscodec, but from the discussion above I understand the original example did work.
Can you take a quick look at the code below?

You were also saying that you were planning to implement "Decoders".
Is this some kind of replacement for the cscodec stuff?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Rtsp = Media.Rtsp;
using Rtp = Media.Rtp;
using System.Threading;
using System.IO;
using System.Drawing;
using System.ComponentModel;
using cscodec;
using cscodec.h264.player;
using System.Windows.Forms;

namespace ConsoleApplication
{
    class Program
    {
        static FrameDecoder frameDecoder = null;
        static int counter = 1;

        static void Main(string[] args)
        {
            Rtsp.RtspClient client = new Rtsp.RtspClient("rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov");

            ///The client has a Client Property which is used to access the RtpClient
            //Performs the Options, Describe, Setup and Play Request
            client.StartPlaying();

            //Attach events at the frame level
            client.Client.RtpFrameChanged +=
              new Rtp.RtpClient.RtpFrameHandler(Client_RtpFrameChanged);


            //Do something else 
            while (true)
            {
                Thread.Sleep(100);
            }

            //Send the Teardown and Goodbye
            client.StopPlaying();
        }

        static void Client_RtpFrameChanged(object sender, Rtp.RtpFrame frame)
        {
            bool bFirst = true;
            Rtp.RtpClient Client = sender as Rtp.RtpClient;

            try
            {

                var context = ((Media.Rtp.RtpClient)sender).GetContextByPayloadType(frame.PayloadTypeByte);

                if (context == null || context.MediaDescription.MediaType != Media.Sdp.MediaType.video) return;

                using (Media.Rtsp.Server.MediaTypes.RFC6184Media.RFC6184Frame hframe = new Media.Rtsp.Server.MediaTypes.RFC6184Media.RFC6184Frame(frame))
                {
                    if (frameDecoder == null)
                    {
                        Media.Sdp.SessionDescriptionLine fmtp = context.MediaDescription.FmtpLine;

                        byte[] sps = null, pps = null;

                        foreach (string p in fmtp.Parts)
                        {
                            string trim = p.Trim();
                            if (trim.StartsWith("sprop-parameter-sets=", StringComparison.InvariantCultureIgnoreCase))
                            {
                                string[] data = trim.Replace("sprop-parameter-sets=", string.Empty).Split(',');
                                sps = System.Convert.FromBase64String(data[0]);
                                pps = System.Convert.FromBase64String(data[1]);
                                break;
                            }
                        }

                        bool hasSps, hasPps, sei, slice, idr;

                        hframe.Depacketize();
                        hasSps = hframe.ContainsSPS;
                        hasPps = hframe.ContainsPPS;
                        sei = hframe.ContainsSEI;
                        slice = hframe.ContainsSlice;
                        idr = hframe.ContainsIDR;

                        using (var stream = new System.IO.MemoryStream())
                        {
                            if (!hasSps && sps != null)
                            {
                                stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);

                                stream.Write(sps, 0, sps.Length);
                            }

                            if (!hasPps && pps != null)
                            {
                                stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);

                                stream.Write(pps, 0, pps.Length);
                            }

                            stream.Position = 0;

                            playStream(stream);
                        }
                    }
                    else hframe.Depacketize();

                    playStream(hframe.Buffer);
                }
            }
            catch
            {
                return;
            }
        }

        public static bool playStream(Stream fin)
        {
            if (frameDecoder == null) frameDecoder = new FrameDecoder(fin);
            else frameDecoder.SetStream(fin);

            try
            {
                while (true)
                {
                    var picture = frameDecoder.DecodeFrame();       // <--- this call always generates an EndOfStreamException

                    var Width = picture.imageWidthWOEdge;
                    var Height = picture.imageHeightWOEdge;

                    Bitmap bmp = new Bitmap(FrameUtils.imageFromFrameWithoutEdges(picture, Width, Height));
                    bmp.Save(string.Format("c:\\temp\\{0:D8}.jpg", counter++), System.Drawing.Imaging.ImageFormat.Jpeg);
                }
            }
            catch (EndOfStreamException)
            {
                return false;
            }
        }

        public static void decode_stream(Stream fin)
        {
            using (var fs = new FileStream(string.Format("c:\\test2.h264"), FileMode.Append))
                fin.CopyTo(fs);

        }
    }
}
Coordinator
Dec 26, 2014 at 1:47 PM
Edited Dec 26, 2014 at 11:01 PM
Hello,

The code you posted should be fine as of the latest release; thanks for updating the example :)

Yes, "Decoders" will "replace" the csCodec stuff (when I eventually get around to it).

You're probably getting the exception because the frame decoder also needs to have _hasMoreNal = true.

If you need anything else let me know!
Marked as answer by juliusfriedman on 12/26/2014 at 4:01 PM
Dec 31, 2014 at 9:23 PM
Can you post the completed code? I'm getting an EndOfStreamException "Attempted to read past the end of the stream.".
Coordinator
Dec 31, 2014 at 9:43 PM
You need to add a way to get more data to the FrameDecoder.

If you search up in the thread for SetStream you will see I described how I did it.

Also, I think SilentWarrior has done some additional work to improve quality; I'm not sure if he used the latest version, which was part of a PSP emulator. I linked to that solution above as well.
Marked as answer by juliusfriedman on 12/31/2014 at 2:43 PM
Dec 31, 2014 at 10:52 PM
I was hoping the code could be all consolidated in a single reply :)
Jan 1, 2015 at 2:00 AM
Ok, I got it working, but I'm awaiting the new decoder to give that a shot.
Jan 1, 2015 at 6:39 PM
So everything seemed to be working fine last night. The video stream was coming in nicely and the video was somewhat smooth. At night, my cameras switch to B&W with infrared lighting the area.

This morning, the cameras are in color and the video is sharper than at night. However, the application runs OK for a few seconds and then it starts to lag or stop, then it spurts ahead and often jumps around (even backwards and forwards!!). Any clue?
Coordinator
Jan 1, 2015 at 9:19 PM
Well that's good.

Post up the code so I can take a look.

I think the problem is probably checking for IsComplete in the event handler, but we will see.

Also, what protocol, TCP or UDP?
Marked as answer by juliusfriedman on 1/1/2015 at 3:12 PM
Jan 2, 2015 at 3:51 PM
void InitializeVideoFeed(string sURL)
    {
        oRTSPClient = new RtspClient(sURL, RtspClient.ClientProtocolType.Tcp, Media.Rtsp.RtspMessage.MaximumLength);
        oRTSPClient.Credential = new System.Net.NetworkCredential("admin", "C0r5a1");
        oRTSPClient.AuthenticationScheme = System.Net.AuthenticationSchemes.Basic;
        oRTSPClient.OnConnect += client_OnConnect;

        //oThread = new Thread(oRTSPClient.Connect);
        oRTSPClient.OnDisconnect += client_OnDisconnect;
        oRTSPClient.OnStop += client_OnStop;
        oRTSPClient.OnResponse += client_OnResponse;


        oRTSPClient.Connect();
        bContinue = true;
    }
private void Client_RtpFrameChanged(object sender, Media.Rtp.RtpFrame frame)
    {

        bool bFirst = true;
        Media.Rtp.RtpClient Client = sender as Media.Rtp.RtpClient;
        if(frame.IsComplete)
        { 
            try
            {

                var context = ((Media.Rtp.RtpClient)sender).GetContextByPayloadType(frame.PayloadTypeByte);

                if (context == null || context.MediaDescription.MediaType != Media.Sdp.MediaType.video) return;

                using (Media.Rtsp.Server.MediaTypes.RFC6184Media.RFC6184Frame hframe = new Media.Rtsp.Server.MediaTypes.RFC6184Media.RFC6184Frame(frame))
                {
                    if (frameDecoder == null)
                    {
                        Media.Sdp.SessionDescriptionLine fmtp = context.MediaDescription.FmtpLine;

                        byte[] sps = null, pps = null;

                        foreach (string p in fmtp.Parts)
                        {
                            string trim = p.Trim();
                            if (trim.StartsWith("sprop-parameter-sets=", StringComparison.InvariantCultureIgnoreCase))
                            {
                                string[] data = trim.Replace("sprop-parameter-sets=", string.Empty).Split(',');
                                sps = System.Convert.FromBase64String(data[0]);
                                pps = System.Convert.FromBase64String(data[1]);
                                break;
                            }
                        }

                        bool hasSps, hasPps, sei, slice, idr;

                        hframe.Depacketize();
                        hasSps = hframe.ContainsSPS;
                        hasPps = hframe.ContainsPPS;
                        sei = hframe.ContainsSEI;
                        slice = hframe.ContainsSlice;
                        idr = hframe.ContainsIDR;

                        using (var stream = new System.IO.MemoryStream())
                        {
                            if (!hasSps && sps != null)
                            {
                                stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);

                                stream.Write(sps, 0, sps.Length);
                            }

                            if (!hasPps && pps != null)
                            {
                                stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);

                                stream.Write(pps, 0, pps.Length);
                            }

                            stream.Position = 0;

                            playStream(stream);
                        }
                    }
                    else hframe.Depacketize();

                    playStream(hframe.Buffer);
                }
            }
            catch
            {
                return;
            }
        }
    }

public bool playStream(Stream fin)
    {
        if (frameDecoder == null) 
            frameDecoder = new FrameDecoder(fin);
        else
            frameDecoder.SetStream(fin);

        try
        {


            if (bShowCamera)
            {

                var picture = frameDecoder.DecodeFrame();       

                var Width = picture.imageWidthWOEdge;
                var Height = picture.imageHeightWOEdge;

                Bitmap bmp = new Bitmap(FrameUtils.imageFromFrameWithoutEdges(picture, Width, Height));
                picSecurityCamera.Image = bmp;
                picSecurityCamera.Refresh();
            }


            return (true);
        }
        catch (EndOfStreamException)
        {
            return false;
        }
    }
Coordinator
Jan 2, 2015 at 8:07 PM
Edited Jan 2, 2015 at 8:43 PM
Well, I just realized that both your example and his are the same; they both assume that 'frameDecoder' being null indicates the SPS or PPS are not required. Also, what about SEI (Supplemental Enhancement Information)? Those also need start codes, but the frame class should be adding the start code for them, the same as with the SPS and PPS.

The only additional logic which needs to be accounted for is that if the SPS or PPS are not sent in band, they are required to be prepended to the decoder input.

So the logic should check for a previously written SPS or PPS (which I guess in this case is signaled by the frameDecoder being NOT null).

Additionally, I would base this only on the values of those properties when Depacketize is called, and only if the above is true (a previously written SPS and PPS).

Furthermore, I would check that 'hasMoreNal' is false before modifying the stream; if there is more NAL data and you perform 'SetStream' you will be truncating your own stream while decoding.

You can do this in various ways...

1)

Create a Master stream.

This stream would be played by the 'FrameDecoder'.

When a frame was completed, I would Depacketize the frame and then write its buffer into the Master stream without affecting the reading position of the FrameDecoder.

2) When a frame was completed, I would Enqueue the frame into a Queue.

When the EndOfStream exception occurred, I would Dequeue a frame from the queue if available and depacketize and play it there (see the sketch after this list).

3) You could replace the Queue in 2) with a SortedList and ensure that all frames are played in order of 'Timestamp'.
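
A rough sketch of option 2), using only the APIs shown in the examples above (playStream/FrameDecoder are the ones from those examples; here the depacketized buffer is queued rather than the RtpFrame itself, to keep the sketch simple):

    // Producer: runs on the RtpClient's event thread, does the minimum and returns.
    static readonly Queue<System.IO.MemoryStream> pending = new Queue<System.IO.MemoryStream>();

    static void Client_RtpFrameChanged(object sender, Media.Rtp.RtpFrame frame)
    {
        if (!frame.IsComplete) return;
        using (var hframe = new Media.Rtsp.Server.MediaTypes.RFC6184Media.RFC6184Frame(frame))
        {
            hframe.Depacketize();
            lock (pending) pending.Enqueue(hframe.Buffer);
            hframe.Buffer = null; // keep the buffer alive after the frame is disposed, as in the examples above
        }
    }

    // Consumer: runs on its own thread; feeds the decoder whenever data is available.
    static void DecodeLoop()
    {
        while (true)
        {
            System.IO.MemoryStream next = null;
            lock (pending) if (pending.Count > 0) next = pending.Dequeue();
            if (next == null) { System.Threading.Thread.Sleep(10); continue; }
            playStream(next); // playStream returns false on EndOfStreamException and waits for the next buffer
        }
    }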

Lastly, and probably most importantly: can you take a small capture (less than 1 minute) of the stream playing with Wireshark and send that to me? I will check if your device is using the H.264 NALs which require re-ordering by DON.

If this is the case, I will have to add support for re-ordering by 'Decoding Order Number' in the Depacketize method, which is not hard but needs to be done anyway.

Hopefully this helps!
Marked as answer by juliusfriedman on 1/2/2015 at 1:07 PM
Coordinator
Jan 7, 2015 at 10:29 PM
Edited Jan 8, 2015 at 3:35 PM
Changeset 110760 has been released and should provide better performance and handling of prior issues.

Let me know if anyone has any further issues!
Marked as answer by juliusfriedman on 1/7/2015 at 3:29 PM
Coordinator
Jan 13, 2015 at 10:21 AM
Changeset 110760 has been released and should provide better performance and handling of prior issues.

Rtsp 2.0 / RFC2326bis is also supported.

Let me know if anyone has any further issues!
Marked as answer by juliusfriedman on 1/13/2015 at 3:21 AM
Jan 24, 2015 at 8:05 AM
Hi,
First of all, thanks for your awesome library.
I'm trying to make an application that records an h264 stream from a camera and saves the video to the hard disk. I used the code above for that and it's working great for one of my test cameras (I have two). When I connect to the cameras with VLC, the codec information window shows "Decoded format" as "Planar 4:2:0 YUV full scale" for the working one and "Planar 4:2:0 YUV" for the one that doesn't work. Can you please help me?

thanks.
Coordinator
Jan 24, 2015 at 3:15 PM
Obviously there is a difference somewhere; it's hard to say without seeing the SDP and some corresponding packets from the same session.

Some cameras' or encoders' streams also require otherwise-unneeded or proprietary information to be removed or converted before use with a decoder, unless the decoder expects the difference and handles it as decoding is performed.

A new release will be made soon which should also make depacketization easier when a compliant media description is supplied.
Marked as answer by juliusfriedman on 1/26/2015 at 1:05 AM
Coordinator
Jan 26, 2015 at 8:05 AM
110851 has been released!

Make sure you guys give that a try!
Marked as answer by juliusfriedman on 1/26/2015 at 1:05 AM
Coordinator
Feb 5, 2015 at 10:34 PM
https://net7mma.codeplex.com/SourceControl/latest has been updated with the latest release.

There are a number of performance and API improvements.

Let me know how you like it!
Marked as answer by juliusfriedman on 2/5/2015 at 3:34 PM
Feb 7, 2015 at 4:26 PM
Can someone post complete and working code? I want to show and stream a live feed from a HikVision camera.

Thanks in advance.
Coordinator
Feb 7, 2015 at 7:01 PM
The code above does work... For any kind of camera.

Also, 110910 has been released.
Marked as answer by juliusfriedman on 2/7/2015 at 12:01 PM
Feb 18, 2015 at 4:34 PM
Hello. I've tried H264 with the cscodec library too (I'll give the newer one a try), but it took about 100 ms per frame to decode (on an i7, 64-bit, in debug mode with no optimization at all), so I'm gladly waiting for your decoding implementation too.
In the meanwhile we used NReco, which you all may be interested in; it basically uses FFMPEG but has quite an easy interface. Unfortunately the free version doesn't come with examples, so we bought them, but if any of you need them I guess I can post some very short code snippets. It's a very good wrapper library; it gave us about 3-4 ms per frame (on an i7, 64-bit, debug mode, no optimization). Hope it is OK to post this info here.

Good evening, and thank you a lot.
Giacomo
Coordinator
Feb 19, 2015 at 1:57 AM
I hope you realize the wrapper is only a small portion of the time it takes to encode and decode from one format to another.

You paid for the demos for NReco and you want to share them here?

I don't think it's appropriate but I am not going to stop you either because I don't believe they should be charging for their wrapper anyway but that is another story.

It's fine to post whatever information you want here because in the end you must know that it doesn't help anyone.

What really helps is contributing to the library or doing things the correct way in the beginning.

Why is everyone here so interested in decoding and encoding h264? What about h265? Mpeg 1 or 2 or 4?

What about RAW?

Furthermore, What are going to do with the decoded data?

Some people here want to do DSP, some people here want to do a media player and some people don't even know what they want to do...

If everyone would make their own thread and ask their own questions, and stop copying and pasting everyone else's code, the level of confusion would be half as high as it probably is now.

Silverlight has its own decoder (even without Windows 7, I believe).

Windows 7 Ultimate has a Mpeg Decoder built in.

Direct X and Media Foundation all have the ability to register transforms.

Then you have the Non MS solutions, GStreamer, LibAv / FFMPEG. (S)MPlayer, QuickTime, Real Player... and the list goes on and on.

My library is compatible with all of them, so why ANYONE needs or wants to do the decoding on their own is again something up for question; and furthermore, if you're waiting for my implementation, go implement one yourself, especially if time is critical.

I will be doing a managed implementation for encoding and decoding but how can you expect me to go ahead with that when I don't even have the Media Writing to files done? (Or WriteBits method)?

Anyway, in short: work on the documentation for the library and possibly include a section about working with external encoders or decoders; make your own page or blog and do that, or do something else, but do it in a way which can further the project and everyone involved if possible.

Thank you.
Marked as answer by juliusfriedman on 2/18/2015 at 6:57 PM
Mar 5, 2015 at 10:17 AM
Edited Mar 5, 2015 at 10:22 AM
Hi.
I am trying to write the image to a video file using example of DavidDouglas. But i have a small problem: image is "trouble" in some time (see picture). Are you have any idea why this is happening?

https://plus.google.com/117390611892424838813/posts/SxWqpuqLaz2?pid=6122709481363289170&oid=117390611892424838813
// NOTE: Queue is not thread-safe; accessing it from two threads needs a lock or a ConcurrentQueue.
public static Queue<MemoryStream> q = new Queue<MemoryStream>();
private static FrameDecoder frameDecoder = null;

private static void Main(string[] args)
        {
            Thread thread3 = new Thread(new ThreadStart(Rec));
            thread3.Start();

            Rtsp.RtspClient client = new Rtsp.RtspClient("rtsp", Rtsp.RtspClient.ClientProtocolType.Tcp, 8192);
            client.StartPlaying();

            client.Client.RtpFrameChanged += new Rtp.RtpClient.RtpFrameHandler(Client_RtpFrameChanged);

            while (true)
            {
                Thread.Sleep(100);
            }
            client.StopPlaying();
        }
private static void Client_RtpFrameChanged(object sender, Rtp.RtpFrame frame)
        {
            bool bFirst = true;
            Media.Rtp.RtpClient Client = sender as Media.Rtp.RtpClient;
            if (frame.IsComplete)
            {
                try
                {
                    var context = ((Media.Rtp.RtpClient)sender).GetContextByPayloadType(frame.PayloadTypeByte);
                    if (context == null || context.MediaDescription.MediaType != Media.Sdp.MediaType.video) return;
                    using ( Media.Rtsp.Server.MediaTypes.RFC6184Media.RFC6184Frame hframe = new Media.Rtsp.Server.MediaTypes.RFC6184Media.RFC6184Frame(frame))
                    {
                        if (frameDecoder == null)
                        {
                            Media.Sdp.SessionDescriptionLine fmtp = context.MediaDescription.FmtpLine;
                            byte[] sps = null, pps = null;
                            foreach (string p in fmtp.Parts)
                            {
                                string trim = p.Trim();
                                if (trim.StartsWith("sprop-parameter-sets=", StringComparison.InvariantCultureIgnoreCase))
                                {
                                    string[] data = trim.Replace("sprop-parameter-sets=", string.Empty).Split(',');
                                    sps = System.Convert.FromBase64String(data[0]);
                                    pps = System.Convert.FromBase64String(data[1]);
                                    break;
                                }
                            }
                            bool hasSps, hasPps, sei, slice, idr;
                            hframe.Depacketize();

                            hasSps = hframe.ContainsSPS;
                            hasPps = hframe.ContainsPPS;
                            sei = hframe.ContainsSEI;
                            slice = hframe.ContainsSlice;
                            idr = hframe.ContainsIDR;


                            using (var stream = new System.IO.MemoryStream())
                            {
                                if (!hasSps && sps != null)
                                {
                                    stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);
                                    stream.Write(sps, 0, sps.Length);
                                }

                                if (!hasPps && pps != null)
                                {
                                    stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);
                                    stream.Write(pps, 0, pps.Length);
                                }
                                stream.Position = 0;
                                // NOTE: this stream (holding the SPS/PPS) is built but never enqueued, so the parameter sets never reach the file.
                            }
                        }
                        else
                        {
                            hframe.Depacketize();
                        }
                        q.Enqueue(hframe.Buffer);
                        hframe.Buffer = null;
                    }
                }
                catch (Exception ex)
                {
                    return;
                }
            }
        }
public static void Rec()
        {
            FileStream fileStream;
            string filePath = @"C:\temp\1.exp";
            bool isIFrame = false;
            int countIframe = 0;
            if (new FileInfo(filePath).Exists == true)
            {
                fileStream = new FileStream(filePath, FileMode.Append, FileAccess.Write, FileShare.ReadWrite);
            }
            else
            {
                fileStream = new FileStream(filePath, FileMode.OpenOrCreate, FileAccess.ReadWrite);
            }

            while (true)
            {
                if (q.Count > 0)
                {
                    MemoryStream m = q.Dequeue();
                    byte[] array = new byte[m.Length];
                    array = m.ToArray();
                    fileStream.Write(array, 0, array.Length);
                    array = null;
                }
                else
                {
                    Thread.Sleep(50);
                }
            }
        }
Coordinator
Mar 9, 2015 at 2:36 AM
There are several possible reasons; without knowing how you acquired the image it is really impossible to say.

It comes down to what decoder you used and what data you gave it.

It LOOKS like the "de-blocking" filter in your decoder is having some type of issue; why it is having such an issue is again hard to say without more information.

It could be missing or corrupt data given to the decoder, or an invalid calculation in the transformation from encoded data to decoded data.

You may want to ask whoever's decoder you're using why this happens, as they would be better suited to answer that question.
Marked as answer by juliusfriedman on 3/8/2015 at 7:36 PM
Mar 18, 2015 at 3:30 PM
Hi Julius,
thanks a lot for your library and for sharing it with everybody. Like the other people here, I am trying to fetch an H264 stream from a camera for further processing. At the current point I'm just trying to save it and replay it with VLC.
Previous examples showed how to do this with the RtpFrameChanged event, which I also tried. However, when I play the file, only a small upper part of the video is shown correctly; the rest looks like "repeated values of the last line" (I used the RFC6184 media type). What I noticed is that the condition "frame.isComplete" is never met, which sounds odd to me.

SDP:
v=0
o=- 1231928232 1 IN IP4 192.168.1.2
s=SONY RTSP Server
c=IN IP4 0.0.0.0
t=0 0
a=range:npt=now-
m=video 0 RTP/AVP 105
a=rtpmap:105 H264/90000
a=control:video
a=framerate:12.0
a=fmtp:105 packetization-mode=1; profile-level-id=428020; sprop-parameter-sets=Z0KAINoBQBlE,aM48gA==
Do you have any idea what's wrong with my approach? If you need additional information or the code, let me know.
Thanks in advance!
Coordinator
Mar 18, 2015 at 5:06 PM
Can't really comment on the approach without code.

Also a wireshark capture of the same connection would be helpful in determining what's received and what's not.

Post those up and I will definitely take a look.
Marked as answer by juliusfriedman on 3/18/2015 at 10:06 AM
Mar 19, 2015 at 8:41 AM
Okay, you can find the Wireshark dump here.
Thanks for taking a look!
using System;
using System.Drawing;
using System.Text;
using System.Threading;
using System.IO;
using System.Windows.Forms;

using Media;
using Media.Rtp;
using Media.Rtsp;

namespace RTPTest
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private RtspClient client;
        private bool bFirst = true;
        private String server;

        private void button1_Click(object sender, EventArgs e)
        {
            server = @"rtsp://192.168.1.2/media/video1";
            client = new RtspClient(server);
            client.OnConnect += client_OnConnect;
            client.Connect();
        }
        private void client_OnConnect(RtspClient sender, object args)
        {
            if (client.IsConnected)
            {

                try
                {
                    if (!sender.IsPlaying)
                    {
                        sender.OnPlay += sender_OnPlay;
                        sender.StartPlaying();

                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message.ToString());
                    return;
                }
            }
        }
        private void sender_OnPlay(RtspClient sender, object args)
        {
            try
            {
                if (sender.IsPlaying)
                {
                    sender.Client.FrameChangedEventsEnabled = true;
                    sender.Client.RtpFrameChanged += receivedFrame;

                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message.ToString());
            }

        }
        private void receivedFrame(object sender, Media.Rtp.RtpFrame frame)
        {

            Media.Rtp.RtpClient Client = sender as Media.Rtp.RtpClient;
            try
            {

                var context = ((Media.Rtp.RtpClient)sender).GetContextByPayloadType(frame.PayloadTypeByte);
                if (context == null || context.MediaDescription.MediaType != Media.Sdp.MediaType.video)
                {
                    Console.WriteLine("Context mismatch");
                    return;
                }
                using (Media.Rtsp.Server.MediaTypes.RFC6184Media.RFC6184Frame hframe = new Media.Rtsp.Server.MediaTypes.RFC6184Media.RFC6184Frame(frame))
                {
                    if (bFirst == true)
                    {
                        Media.Sdp.SessionDescriptionLine fmtp = context.MediaDescription.FmtpLine;

                        byte[] sps = null, pps = null;

                        foreach (string p in fmtp.Parts)
                        {
                            string trim = p.Trim();
                            if (trim.StartsWith("sprop-parameter-sets=", StringComparison.InvariantCultureIgnoreCase))
                            {
                                string[] data = trim.Replace("sprop-parameter-sets=", string.Empty).Split(',');
                                sps = System.Convert.FromBase64String(data[0]);
                                pps = System.Convert.FromBase64String(data[1]);
                                break;
                            }
                        }

                        bool hasSps, hasPps, sei, slice, idr;
                        hframe.Depacketize();
                        hasSps = hframe.ContainsSPS;
                        hasPps = hframe.ContainsPPS;
                        sei = hframe.ContainsSEI;
                        slice = hframe.ContainsSlice;
                        idr = hframe.ContainsIDR;

                        using (var stream = new System.IO.MemoryStream())
                        {
                            if (!hasSps && sps != null)
                            {
                                stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);

                                stream.Write(sps, 0, sps.Length);
                            }

                            if (!hasPps && pps != null)
                            {
                                stream.Write(new byte[] { 0x00, 0x00, 0x00, 0x01 }, 0, 4);

                                stream.Write(pps, 0, pps.Length);
                            }
                            hframe.Buffer.CopyTo(stream);
                            stream.Position = 0;

                            saveStream(stream);
                            //decodeStream(stream);
                            bFirst = false;
                        }
                    }
                    else
                    {
                        hframe.Depacketize();
                        saveStream(hframe.Buffer);
                        hframe.Buffer = null;
                    }
                }
            }
            catch (Exception ex)
            {
                return;
            }
        }

        private void saveStream(Stream fin)
        {
            using (var fs = new FileStream(string.Format("test.h264"), FileMode.Append))
                fin.CopyTo(fs);
        }

        private void button2_Click(object sender, System.EventArgs e)
        {
            client.StopPlaying();
            client.Disconnect();
            client.Dispose();
        }
    }
}
Coordinator
Mar 19, 2015 at 1:53 PM
Try under TCP, as you were using UDP.

new RtspClient(server, ClientProtocolType.Tcp)

I don't see any specific reason you didn't get completed frames; the camera does use the Marker bit correctly.

Where do you check that the frame IsComplete?

Also is it possible to open this camera up for testing?
Marked as answer by juliusfriedman on 3/19/2015 at 7:03 AM
Coordinator
Mar 19, 2015 at 11:04 PM
I also just made another release which may solve this issue. Give it a go and let me know if you are still having trouble!
Mar 20, 2015 at 8:33 AM
If I use TCP, then the code above works; I am able to save and replay using VLC. However, if I switch to UDP, it doesn't work. I also tried the latest revision (111192) as you suggested, but it still only works with TCP.
Opening the stream in VLC works with UDP; I confirmed that via Wireshark. Just as info, the camera is directly connected, so I guess there should be no dropped packets.

I added a check for "frame.isComplete" before the "try" part; please correct me if that's the wrong place. TCP continues to work; via UDP the condition is now sometimes met, but I am not able to replay the data. A lot less data comes in when I compare how fast the recorded file grows.

Regarding the camera, it's a Sony SNC-CH240 and I don't think I can open that cam. Is there anything particular you want to know/try?
Coordinator
Mar 20, 2015 at 2:21 PM
Check the MTU settings on the camera and the workstation.

Check any settings which may affect how poll would work.

http://stackoverflow.com/questions/25312169/socket-poll-has-wildly-varying-delay-on-different-machines

Just because the camera is local doesn't mean you can't drop packets or they can't be re-ordered.

VLC is missing packets too, so it sounds like a setting in your OS.

That sounds like the right place for the check. It should be one of the first checks.

If you can open the camera for testing I can try to replicate and determine what needs to be done to get all the packets. It could be socket configuration or otherwise related.

All I know is that right now I can't replicate this issue so I'm just shooting in the dark.
Marked as answer by juliusfriedman on 3/20/2015 at 7:21 AM
Mar 23, 2015 at 3:08 PM
The MTU settings for the workstation and the camera are both set to 1500. I also checked the link you provided and indeed, the "powercfg command" resulted in a non-standard value. I killed the applications causing it and got a normal timer interval, but unfortunately the problem still remains.
Is there a way I can check whether the cause of this problem is something like this polling delay within your library, or e.g. see which packets arrived and which didn't?
How can you see that VLC is also missing some packets?
I also checked; it seems I can't open up the camera for you, sorry. Obviously I don't expect miracles from you without your actually being able to test, but I am grateful for every idea you have :)
Coordinator
Mar 23, 2015 at 3:45 PM
Edited Mar 23, 2015 at 11:53 PM
Cool, that explains the timing issue.

I thought you said VLC was missing packets too; apparently I read your response wrong.

"via UDP the condition is now sometimes met"

I have no idea what condition you're talking about :)

I assume you mean the property is sometimes true or sometimes false?

Please note that the library simply fires events to let you know that the frame is changing; it will probably fire that event multiple times, e.g. once when it changes and then again when the frame is complete, if it was not complete the last time.

You can also check IsMissingPackets; if that is true, then try a larger receive buffer, as that seems to help others. It can be specified in the constructor of the RtspClient.
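
For example, something like this (a sketch; the third constructor argument is the receive buffer size, as in the earlier examples in this thread, the 32x multiplier is arbitrary, and the Udp member of ClientProtocolType is assumed since only Tcp appears above):

    // Sketch: larger receive buffer for UDP, and skip frames that are missing packets.
    var client = new Media.Rtsp.RtspClient(url,
        Media.Rtsp.RtspClient.ClientProtocolType.Udp, // assumed member; Tcp is shown elsewhere in this thread
        Media.Rtsp.RtspMessage.MaximumLength * 32);   // 32x the default buffer, tune as needed

    client.StartPlaying();

    client.Client.RtpFrameChanged += (s, frame) =>
    {
        if (!frame.IsComplete || frame.IsMissingPackets) return; // only process complete frames
        // depacketize / decode as in the examples above
    };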

Grab the latest Beta version of the code and test with that and if you are still having issues post another Wireshark capture and we will go from there.
Marked as answer by juliusfriedman on 3/23/2015 at 4:53 PM
Mar 24, 2015 at 1:19 PM
Good news, it seems to work now :)
What I have done:
  • Updated to the latest version (111202)
  • Greatly increased the buffer size (32 times)
  • Only processed a frame if no packets were missing
Thank you so much :))
Coordinator
Mar 25, 2015 at 2:57 AM
Not a problem, one is glad to be of service.
Marked as answer by juliusfriedman on 3/25/2015 at 6:42 AM
Mar 25, 2015 at 1:33 PM
Okay, I also managed to use FFMPEG.AutoGen to decode the video stream afterwards. What I now realized is that using UDP with this rather large buffer leads to a certain time delay which does not exist using TCP. If I decrease the buffer size, e.g. to the standard size, the delay is gone but the image becomes distorted again.
Any idea how to get UDP reception to work without this time delay?
Coordinator
Mar 25, 2015 at 2:34 PM
Edited Mar 25, 2015 at 5:31 PM
Delay Where?

My library CANNOT add delay beyond the initial Round Trip Time of the packet unless the Garbage Collector runs and delays a packet event, which should not occur with the default design because there aren't any allocations per packet or frame event.

With that being said, if you have a very small amount of RAM (<= 64 MB) then it's possible that the Garbage Collector is causing a small delay, which can be adjusted for if required; I will have a Micro Framework compatible solution soon which you can try if that is the case.

The main change there will be allocating the managed packet instances and then just pointing them at the new data from the events rather than creating new managed packet instances for each event.

Nonetheless, can you describe the process or submit a small code sample?

The RtpClient provides packet events directly if desired but it doesn't cause delay.

The RtspClient doesn't delay any events either as it just creates the RtpClient.

The RtspServer or ClientSession don't delay any received packets either as they just use the RtpClient.

Based on this, here is what I imagine you are doing [and it would be helpful if you could verify as much, as ambiguity doesn't help either of us].

You are using the RtspServer with a RtspSource.

The RtspSource class WILL delay a packet by default until the entire frame arrives if using the default options; this is done to ensure re-ordering is reduced as much as possible and that only complete data reaches the end user's client.

If the source doesn't set markers properly or if you want the packets to go out as soon as possible you can change this with the following logic:

'RtspClient.Client.FrameChangedEventsEnabled = false;'

After the RtspSource.Start is called and after the RtspSource.RtspClient.Client is created (RtspSource.Ready == true).

This will make the packets be sent to any ClientSession as soon as they are received from the Source, but be warned that any re-ordering you experience at the server will be seen in the client, and possibly much worse, depending on the connection speed and bandwidth utilization of the RtspServer and that of the ClientSession.
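
Put together, the sequencing looks roughly like this (a sketch using only the names above; 'source' stands for your RtspSource instance):

    // Sketch: turn off per-frame buffering on a RtspSource so packets go out per packet.
    source.Start();

    // Wait until the source's RtspClient and its RtpClient exist.
    while (!source.Ready) System.Threading.Thread.Sleep(10);

    // From now on packets are forwarded as received instead of waiting for complete frames.
    source.RtspClient.Client.FrameChangedEventsEnabled = false;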

Before you change this however I HIGHLY suggest you understand what you are doing.

The real issue is yet to be determined, but it seems 'delay' is appearing somewhere; based on that, I would ask how you are getting data to FFMPEG, how long it takes to decode, and what you are doing in the meantime with the RtpClient.

E.g. if you are blocking during that event, then how do you expect packets to be received while you are waiting for the decoding to occur?

I would imagine you are introducing the delay when you are decoding the packets, e.g. you are blocking during the event which stops more data from being received and more events from being raised.

If you can submit a sample then I can help you adjust this but other than that I think you need to look at what you are doing and how you are doing it.

You should only be doing as little work as possible in event dispatched from the RtpClient's thread, you don't own that Time Slice, the instance does and it defers a portion of it (reluctantly) to allow handling of the event.

Indeed, I could have another thread which is purely responsible for raising the events, but the same issue would undoubtedly arise when more than one consumer uses the same RtpClient and one of them holds the other up.

I could have a thread per event, but that would be very memory intensive and would also require a ThreadPool implementation.

The solution is to use your OWN THREAD with your OWN time slice, which can take as long as it likes to do whatever it needs to do without blocking the RtpClient's thread for more than necessary.
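
For example, a minimal producer/consumer sketch ('rtpClient' is your existing RtpClient instance, and Decode is a placeholder for your own decoding hand-off):

using System.Collections.Concurrent;
using System.Threading;

var frames = new BlockingCollection<Media.Rtp.RtpFrame>();

// Event handler: do the absolute minimum on the RtpClient's thread.
// (Depending on buffer reuse you may need to copy the frame's data here.)
void OnFrameChanged(object sender, Media.Rtp.RtpFrame frame)
{
    if (frame.Complete) frames.Add(frame);
}

// Placeholder for your depacketize / FFMPEG hand-off.
static void Decode(Media.Rtp.RtpFrame frame)
{
}

// Your own thread, with its own time slice, does the heavy lifting.
var worker = new Thread(() =>
{
    foreach (var frame in frames.GetConsumingEnumerable())
    {
        Decode(frame);
    }
})
{ IsBackground = true };
worker.Start();

// Subscribe using the library's frame event (as elsewhere in this thread).
rtpClient.RtpFrameChanged += OnFrameChanged;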

You can also quite easily derive the RtpClient and change how the events are raised if desired, but this will surely cause more context switching and should be understood before being done.

You SHOULD NOT have to do anything like the above-explained derivation except in some very specific niche cases, which I do not care to explain without prior justification.

If that is in any way unclear let me know and I will clarify for you.

In short, I do not believe at this time that this is an issue with my library, but quite simply one with your integration of my library and FFMPEG.

I can't really do your work for you but I will give you advice and help you with code if you post examples.
Marked as answer by juliusfriedman on 3/25/2015 at 7:34 AM
Coordinator
Mar 26, 2015 at 12:57 AM
I have added 'PerPacket' option which may help you.
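
Assuming it is exposed as a settable property on the source, usage would look something like this (a sketch; 'mySource' and the property placement are assumptions, so check the current source for the exact member):

// Assumption: 'PerPacket' is a settable property on the source/stream class.
mySource.PerPacket = true;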

If you could please let me know if this meets your needs I would appreciate it.

Also if you could create a new thread I would appreciate it as I will be removing older threads soon.

Thanks!
Marked as answer by juliusfriedman on 3/25/2015 at 5:57 PM
Apr 13, 2015 at 11:06 AM
Julius, sorry for not coming back to you earlier; I was sick and then CodePlex had some technical problems. Sorry if I didn't choose my words carefully enough before, but I never meant to say that your library was the reason for the delay. You were right about threads: I implemented it that way and it's now working as expected. Thank you so much for the push in the right direction :)
Coordinator
Apr 13, 2015 at 12:23 PM
Not a problem, one is glad to be of service.
Marked as answer by juliusfriedman on 4/13/2015 at 5:23 AM
May 19, 2015 at 4:00 PM
Edited May 19, 2015 at 4:01 PM
Hello juliusfriedman,
Can you please tell me about the H.264 decoding? Is it complete now?

Thanks
Coordinator
May 19, 2015 at 10:44 PM
Hello silentwarrior,

What I can tell you about the H.264 decoding is not existential to the project at the current time.

Essentially.
Marked as answer by juliusfriedman on 5/19/2015 at 3:44 PM
Aug 11, 2015 at 10:28 PM
Edited Aug 11, 2015 at 10:37 PM
Hi,

I've tested the code above to save an H.264 raw stream from my security camera, using version 111212.

When playing with SMPlayer or VLC, the video lags (due to frame errors during recording?) and after just 2 seconds the image moves toward the bottom frame-by-frame (a weird effect...). I've checked my network, configuration, and camera, but everything seems OK.

Finally, I downgraded to version 111192 and it works perfectly (just one incomplete frame in 30 seconds instead of 6 every second with the latest version).

So, are you still working on this beta 111212, or am I the only one with this problem?

Lastly, I want to thank you for the amount of work you've done. All this in pure C#; impressive!
Coordinator
Aug 21, 2015 at 7:52 PM
Hard to say; if you have better luck with the non-beta version, use that.
Marked as answer by juliusfriedman on 8/21/2015 at 12:52 PM
Nov 18, 2015 at 1:20 PM
Hello everyone,
I have read all your posts and tested some code to build my own RTSP client viewer.

Now I have a problem with the function frameDecoder.SetStream(fin);
I downloaded cscodec and compiled it, but there is no SetStream function in FrameDecoder.cs. Do I have the wrong version of the cscodec library?
Coordinator
Nov 18, 2015 at 1:49 PM
Edited Nov 18, 2015 at 1:49 PM
SetStream was added by me to allow a different stream to be passed to the decoder. Here is an example of it:
public void SetStream(System.IO.Stream stream)
{
    if (!hasMoreNAL)
    {
        // No data is pending; adopt the new stream directly.
        fin = stream;
        hasMoreNAL = true;
    }
    else
    {
        // Data is still pending; append the new stream to the end of the
        // existing one (assumes fin is writable, e.g. a MemoryStream),
        // then restore the original read position.
        long position = fin.Position;

        fin.Seek(0, SeekOrigin.End);

        stream.CopyTo(fin);

        fin.Seek(position, SeekOrigin.Begin);

        hasMoreNAL = true;
    }
}
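
For example, after depacketizing a frame (as in the code earlier in this thread) you could hand the result to the decoder like this; this is only a sketch and assumes the decoder instance is named frameDecoder:

var rfc6184Frame = new Media.Rtsp.Server.Streams.RFC6184Stream.RFC6184Frame(frame);
rfc6184Frame.Depacketize();

// Wrap the depacketized data; SetStream keeps a reference on the first
// call, so do not dispose this stream while the decoder is still using it.
var ms = new System.IO.MemoryStream(rfc6184Frame.Buffer.ToArray());
frameDecoder.SetStream(ms);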
Marked as answer by juliusfriedman on 11/18/2015 at 6:49 AM
Nov 18, 2015 at 1:56 PM
OK, no wonder I couldn't find this function :-) Many thanks.
Coordinator
Nov 18, 2015 at 2:16 PM
The function is above; it goes in the FrameDecoder class.
Marked as answer by juliusfriedman on 11/18/2015 at 7:16 AM
Nov 18, 2015 at 2:26 PM
Yes, I know. I have integrated it into the code and now I have no compiler errors.
I must look at the functions, because only some of the frames come out as valid pictures.

Is there a big difference between the RtspClient class itself and the RtspClient usage in the RtspServer?
The same cameras that have the "source not ready" problem in my server test application are running fine with the RtspClient demo application.
Coordinator
Nov 18, 2015 at 6:11 PM
Sorry but I couldn't really understand what you were trying to get across.

The RtspClient class is not used in the RtspServer yet, although it will probably be in the future.

The RtspSource uses the RtspClient class; if your RtspClient demo application works, then it seems the problem is somewhere in your RtspServer application.
Marked as answer by juliusfriedman on 11/18/2015 at 11:11 AM
Feb 19, 2016 at 9:23 AM
Edited Feb 19, 2016 at 9:23 AM
Hello, I am trying to follow the code snippets provided in this thread in order to understand how the library works.

Unfortunately, I get compiler errors. It seems that the API has changed since these examples were posted?

Is there a current example with code that works with the current version of the library?

What I am trying to do is simple:
  • initialize the RtspClient with my URL
  • get payload data from the stream
  • decode this data with an H.264 codec
  • display the video on the screen
I have looked through the forums but all the examples are fragmentary or outdated. So I am a bit lost here...
Coordinator
Feb 19, 2016 at 11:24 AM
The display will be handled by a decoder; the people in this thread are using an external project called cscodec to do the decoding.

The UnitTest project has such an example in UnitTest/Program.cs @ TestRFC6184Frame.

If you need anything else let me know.
Marked as answer by juliusfriedman on 2/19/2016 at 4:24 AM
May 30, 2016 at 2:52 AM
Hello,
I downloaded the latest code and built it.
I am stuck grabbing frames from a LIVE RTSP stream. The media format is 96 and I can get each frame in the RtpFrameChanged event; I use RFC6184Stream to Depacketize it and save each frame in a separate file, but ffplay cannot play the file back.

The same code works very well in release 110005 and older, but it does not work in the newer release.

Why?

e:\green softs\ffmpeg-20160105-git-68eb208-win64-static\bin>ffplay e:\test2.h264
ffplay version N-77704-g68eb208 Copyright (c) 2003-2016 the FFmpeg developers
built with gcc 5.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
libavutil 55. 12.100 / 55. 12.100
libavcodec 57. 21.100 / 57. 21.100
libavformat 57. 21.100 / 57. 21.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 23.100 / 6. 23.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
[h264 @ 0000019dc0eb2260] Format h264 detected only with low score of 1, misdetection possible!
[h264 @ 0000019dc0eb33a0] non-existing PPS 0 referenced
Last message repeated 1 times
[h264 @ 0000019dc0eb33a0] decode_slice_header error
[h264 @ 0000019dc0eb33a0] no frame!
(the same three messages, "non-existing PPS 0 referenced", "decode_slice_header error" and "no frame!", repeat for the remainder of the output)
Coordinator
May 30, 2016 at 4:01 AM
Please start your own thread and I will help you.

Please include a sample application to show how you're making the file.

Please also include a Wireshark dump.

If possible, include a sample made with a version that works and another made with a version that does not, so comparison is easy.

Thank you!
Marked as answer by juliusfriedman on 5/29/2016 at 9:01 PM