Oct 21, 2015 at 7:01 PM
Edited Oct 21, 2015 at 7:12 PM
Audio is supported if the source stream has audio; playing it back is going to depend on the codec involved.
If the source data is just raw PCM, you would only need to generate a WAVE header and append the data to it; the result can then be played with System.Media.SoundPlayer.
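To illustrate the idea, here is a minimal sketch (in Python, just to show the byte layout; the same 44-byte RIFF/WAVE structure is what you would prepend in C# before handing the stream to SoundPlayer). The function name `make_wave_header` and the 8 kHz / mono / 16-bit parameters are my own assumptions for the example, not anything from the library:

```python
import struct

def make_wave_header(pcm_len, sample_rate=8000, channels=1, bits_per_sample=16):
    """Build the canonical 44-byte RIFF/WAVE header for raw PCM data."""
    byte_rate = sample_rate * channels * bits_per_sample // 8
    block_align = channels * bits_per_sample // 8
    return b"".join([
        b"RIFF",
        struct.pack("<I", 36 + pcm_len),   # RIFF chunk size: rest of header + data
        b"WAVE",
        b"fmt ",
        struct.pack("<IHHIIHH",
                    16,                    # fmt sub-chunk size
                    1,                     # audio format 1 = uncompressed PCM
                    channels,
                    sample_rate,
                    byte_rate,
                    block_align,
                    bits_per_sample),
        b"data",
        struct.pack("<I", pcm_len),        # size of the PCM payload that follows
    ])

pcm = bytes(3200)                          # 0.2 s of silence: 8 kHz, 16-bit mono
wav = make_wave_header(len(pcm)) + pcm     # header + raw samples = playable file
```

Once the header is prepended, the bytes form a complete .wav stream that any player (including SoundPlayer over a MemoryStream) can consume.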
Using MediaFoundation you could also handle other supported formats besides PCM.
In the future I may create an example which sends pre-defined sounds from the System.Media namespace.
There is no packetization defined in the profile itself, although again that depends on the codec.
If the codec is PCM, you would basically just divide up your sound data (without the WAVE header) into packets. On the other side, the received data is reassembled and wrapped with a WAVE header; the information for that header, such as the sample rate, comes from the SDP.
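A rough sketch of both sides of that flow, again in Python for brevity. The function names, the 160-byte payload size (which happens to match 20 ms of 8 kHz audio), and the use of Python's wave module to build the header on the receiving end are all assumptions for illustration, not the library's actual API:

```python
import io
import wave

def packetize(pcm, payload_size=160):
    # Sender side: split raw PCM (no WAVE header) into fixed-size payloads.
    return [pcm[i:i + payload_size] for i in range(0, len(pcm), payload_size)]

def depacketize(packets, sample_rate, channels, sample_width):
    # Receiver side: reassemble the payloads and wrap them with a WAVE
    # header whose parameters (rate, channels, sample width) would come
    # from the SDP rather than from the packets themselves.
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(channels)
        w.setsampwidth(sample_width)
        w.setframerate(sample_rate)
        w.writeframes(b"".join(packets))
    return buf.getvalue()

pcm = bytes(3200)                        # pretend capture: 16-bit mono samples
packets = packetize(pcm)                 # what the sender would transmit
wav = depacketize(packets, 8000, 1, 2)   # receiver rebuilds a playable file
```

Note that the packets carry no format information at all; that is exactly why the receiver must get the playback parameters from the SDP.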
There are various examples of media types which do have packetization in the RtspServer.MediaTypes folder.
That's the high level. Let me know once you get some code down of your own if you're still having trouble, and I will help you some more from there.