I have an AXIS IP camera (M1054) that sends an H.264/RTP stream via RTSP.
Unfortunately, it does not send SPS and PPS NALUs at all; it only transfers (fragmented) coded slices.
I'm trying to decode ...
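For streams like this, the SPS and PPS typically have to be obtained out of band (commonly base64-encoded in the SDP's sprop-parameter-sets attribute) and handed to the decoder before any slices. As a minimal sketch of how to recognize parameter sets once you have raw NALU bytes (the function name is mine, not from any library):

```python
# Identify H.264 NAL unit types from a raw NALU payload. The type is the
# low 5 bits of the first byte after the start code / length prefix.
NAL_TYPES = {1: "non-IDR slice", 5: "IDR slice", 7: "SPS", 8: "PPS"}

def nal_unit_type(nalu: bytes) -> int:
    # Header byte layout: forbidden_zero_bit(1) | nal_ref_idc(2) | nal_unit_type(5)
    return nalu[0] & 0x1F

# Example NALU header bytes: 0x67 -> SPS, 0x68 -> PPS, 0x65 -> IDR slice
print(nal_unit_type(b"\x67\x42\x00\x1f"))  # 7 (SPS)
print(nal_unit_type(b"\x68\xce\x3c\x80"))  # 8 (PPS)
print(nal_unit_type(b"\x65\x88\x84\x00"))  # 5 (IDR slice)
```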

I want to use the h264_videotoolbox encoder, but I find that I can't change the bit rate dynamically.
FFmpeg copies the bit rate value and sets it on the VideoToolbox session during initialization, I think.
bit_rate_num = ...

I want to use the h264_videotoolbox codec to encode my video with the FFmpeg library. My input image format is AV_PIX_FMT_YUV420P, but the output video frames look like the following.
I tried libx264 and everything works correctly.
Do you ...
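One common cause of garbled output like this is a planar/bi-planar mismatch: VideoToolbox generally works with bi-planar NV12 (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) rather than planar I420/AV_PIX_FMT_YUV420P, and feeding one chroma layout where the other is expected scrambles the colors. A minimal sketch of the repacking, assuming contiguous planes (the helper name is hypothetical):

```python
def i420_to_nv12(y, u, v):
    """Repack planar I420 (separate U and V planes) into NV12
    (one Y plane + one interleaved UV plane), all as bytes."""
    uv = bytearray()
    for ub, vb in zip(u, v):
        uv.append(ub)  # Cb sample
        uv.append(vb)  # Cr sample
    return bytes(y), bytes(uv)

# Tiny 4x2 frame: 8 luma samples, 2 U and 2 V samples (4:2:0 subsampling)
y = bytes(range(8))
u, v = b"\x10\x11", b"\x20\x21"
y_out, uv_out = i420_to_nv12(y, u, v)
print(uv_out.hex())  # 10201121 -> U0 V0 U1 V1 interleaved
```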

I have been working on an H.264 hardware-accelerated encoder implementation using VideoToolbox's VTCompressionSession for a while now, and a consistent problem has been the unreliable bitrate coming ...
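To put a number on "unreliable", it can help to measure the bitrate the session actually achieves over a sliding window of encoded frame sizes and compare it against the configured kVTCompressionPropertyKey_AverageBitRate. A small sketch of such a meter (the class is hypothetical, not part of VideoToolbox):

```python
from collections import deque

class BitrateMeter:
    """Track achieved bitrate over the last `window` seconds of encoded
    frames. Feed it (timestamp_seconds, frame_size_bytes) pairs."""
    def __init__(self, window=1.0):
        self.window = window
        self.frames = deque()  # (timestamp, bytes)

    def add(self, ts, nbytes):
        self.frames.append((ts, nbytes))
        # Drop frames that fell out of the measurement window
        while self.frames and ts - self.frames[0][0] > self.window:
            self.frames.popleft()

    def bitrate_bps(self):
        if len(self.frames) < 2:
            return 0.0
        span = self.frames[-1][0] - self.frames[0][0]
        total_bits = sum(n for _, n in self.frames) * 8
        return total_bits / span if span > 0 else 0.0

meter = BitrateMeter(window=1.0)
for i in range(30):                 # 30 fps, ~4000 bytes per frame
    meter.add(i / 30.0, 4000)
print(round(meter.bitrate_bps()))   # 993103 (over the ~0.97 s span)
```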

I am trying to test whether it is possible to use HW decoding for real-time communication, say 100 ms client-to-client delay for 720p (40 ms for system delay and 60 ms for network); iOS with VT is already OK.
...

I've written a screen-recording app that writes out H.264 movie files using VideoToolbox and AVAssetWriter. The colors in the recorded files are a bit dull compared to the original screen. I know that this ...
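Dull colors after recording are often a full-range vs. video-range mismatch: full-range [0, 255] samples get squeezed into the video ("limited") range [16, 235], or the file is tagged with the wrong range, so blacks and whites lose contrast. The standard luma mapping, as a sketch (the function name is mine):

```python
def full_to_video_range_luma(y: int) -> int:
    """Map a full-range luma sample [0, 255] into video ("limited")
    range [16, 235], as BT.601/709 video-range encoding does."""
    return round(16 + y * 219 / 255)

print(full_to_video_range_luma(0))    # 16  -> black is lifted
print(full_to_video_range_luma(255))  # 235 -> white is pulled down
print(full_to_video_range_luma(128))  # 126 -> mid grey shifts slightly
```

If a player then interprets those video-range samples as full range without expanding them back, the whole image looks washed out, which matches the "dull" symptom.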

I want to start working on a low-level video player. At this stage, I am able to download the videos from the server, cache them, and play them with AVPlayer.
What I want to do is to remove the AVPlayer, ...

I am currently able to use Apple's VideoToolbox to compress video from my phone's camera, get nal units, and save the data into an .h264 file. The idea is to eventually do live streaming. However, I'd ...
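One detail worth noting for this workflow: VideoToolbox hands back AVCC-style sample buffers in which each NALU carries a big-endian length prefix, while a raw .h264 file (and most streaming setups) expect Annex B start codes. A sketch of the conversion, assuming a 4-byte length prefix as declared in the avcC extradata (the function name is mine):

```python
import struct

def avcc_to_annexb(data: bytes, length_size: int = 4) -> bytes:
    """Convert an AVCC buffer (each NALU prefixed with a big-endian
    length) into Annex B (each NALU prefixed with 00 00 00 01)."""
    out, i = bytearray(), 0
    while i + length_size <= len(data):
        (nal_len,) = struct.unpack(">I", data[i:i + length_size])
        i += length_size
        out += b"\x00\x00\x00\x01" + data[i:i + nal_len]
        i += nal_len
    return bytes(out)

# One 3-byte NALU (0x65... = IDR slice) with a 4-byte length prefix
avcc = b"\x00\x00\x00\x03" + b"\x65\x88\x80"
print(avcc_to_annexb(avcc).hex())  # 00000001658880
```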

I'm making an iOS app that decodes an h264 stream using video-toolbox. I create the stream with ffmpeg on a PC and send it to an iPhone using RTP. It's working nicely when I use this command to create ...

I'm taking a CVImageBufferRef from the camera output, then converting it to a CGImageRef with VideoToolbox, then converting that to a UIImage.
The weird thing is: when I check the size of the UIImage, ...

I am using VTCompressionSessionEncodeFrameWithOutputHandler to compress pixel buffers from camera into raw h264 stream. I am using kVTEncodeFrameOptionKey_ForceKeyFrame to be sure that every output ...
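A cheap way to verify that the ForceKeyFrame option actually took effect is to split the Annex B output on start codes and check the NAL types for an IDR slice (type 5). A sketch with hypothetical helper names:

```python
import re

def split_annexb(stream: bytes):
    """Split an Annex B stream on 4- or 3-byte start codes; emulation
    prevention guarantees 00 00 01 never occurs inside a NALU."""
    return [p for p in re.split(b"\x00\x00\x00\x01|\x00\x00\x01", stream) if p]

def has_idr(stream: bytes) -> bool:
    # nal_unit_type 5 == IDR (instantaneous decoder refresh) slice
    return any(n[0] & 0x1F == 5 for n in split_annexb(stream))

au = (b"\x00\x00\x00\x01\x67\x42"    # SPS
      + b"\x00\x00\x00\x01\x68\xce"  # PPS
      + b"\x00\x00\x01\x65\x88")     # IDR slice
print(has_idr(au))  # True
```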

I am able to compress video captured from the camera device to h264 format using the Video Toolbox framework, but when I try to play that h264 file in VLC player, I am not able to hear the audio of the video....

My goal is to mirror the screen of an iDevice to OSX, as lag-free as possible.
To my knowledge, there are two ways to do this:
Airplay Mirroring (e.g. Reflector)
CoreMediaIO via Lightning (e.g. Quicktime ...

Guys!
I have found a demo on GitHub: VideoToolboxDemo. I also found a Stack Overflow question, how-to-use-videotoolbox-to-decompress-h-264-video-stream, which someone has implemented in ...

In my situation, the iOS HW encoder sometimes generates one NALU and other times generates two NALUs which make up the access unit/picture. When two NALUs are generated for one picture, I combine them ...
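If it helps to see the combining step concretely: one common approach is to prefix each NALU of the picture with a 4-byte Annex B start code and concatenate them into a single access unit. A sketch with a hypothetical helper (the slice bytes are made-up examples):

```python
def make_access_unit(nalus):
    """Join the NALUs of one picture into a single Annex B access unit
    by prefixing each with a 4-byte start code."""
    return b"".join(b"\x00\x00\x00\x01" + n for n in nalus)

# Hypothetical IDR picture split across two slice NALUs by the encoder
au = make_access_unit([b"\x65\x88\x84", b"\x65\x9a\x00"])
print(au.hex())  # 0000000165888400000001659a00
```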

In my work, I try to use iOS VideoToolbox to encode my video from capture. It worked. But when I try to set the fps, it does not work; the encoded video stream's fps is always 30.
In my code, I used ...

I've been struggling with AVSampleBufferDisplayLayer being really choppy with a lot of motion. When there was motion in my live stream, it would become pixelated and half frozen, with multiple frames ...