Wyze have misused industry-standard terms, specifically SD and HD. Why else would Wyze have felt the need to post a separate FAQ on what SD and HD mean? That FAQ misuses industry-standard terms and mis-educates readers on terminology.

SD means 480p, not 1080p at a lower bitrate. HD means 720p, not 1080p at a higher bitrate. FHD means 1080p, QHD means 1440p, and UHD (4K) means 2160p. I recommend naming by vertical resolution, just as with 360p.

The spec page should also include the encoder bitrate for each resolution, e.g. 0.5 Mbps. It could also specify the sensor capability: e.g. if the native sensor resolution is 1920x1080 (16:9), then it’s a 2-megapixel sensor. I encourage any manufacturer to stick to industry-standard terms and provide full details.
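To make the naming concrete, here is a small sketch mapping the industry-standard names above to their 16:9 pixel dimensions and derived megapixel counts (the table itself is my own illustration, not anything from Wyze’s spec page):

```python
# Industry-standard 16:9 resolution names and their pixel dimensions.
# (Illustrative table, not taken from any manufacturer's spec sheet.)
STANDARD_RESOLUTIONS = {
    "360p": (640, 360),
    "SD (480p)": (854, 480),
    "HD (720p)": (1280, 720),
    "FHD (1080p)": (1920, 1080),
    "QHD (1440p)": (2560, 1440),
    "UHD/4K (2160p)": (3840, 2160),
}

def megapixels(width, height):
    """Sensor megapixels = total pixel count / 1,000,000."""
    return width * height / 1_000_000

for name, (w, h) in STANDARD_RESOLUTIONS.items():
    print(f"{name}: {w}x{h} = {megapixels(w, h):.1f} MP")
```

Running this shows why a native 1920x1080 sensor is described as a 2-megapixel sensor: 1920 x 1080 = 2,073,600 pixels, roughly 2.1 MP.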

If Wyze needs a better term for 1080p low bitrate (bandwidth) then name it exactly that, “1080p (low)”.

HD stands for high-definition video and SD stands for standard-definition video. The HD/SD setting does not impact the image quality of the video. All videos are recorded in 1080p. Instead, HD/SD controls the compression levels of the videos for storage.

then it says,

Unlike HD/SD, 360p will impact the image quality and lower the resolution of the live stream to 480x360. All videos are also recorded in 360p, including local recording onto a microSD card.

Nowhere have I found how many days of storage you get at 360p. Figures for HD/SD are quoted, but not for 360p. I also find it strange that if I’m having a day with WiFi interference and the app suggests I downgrade to 360p, isn’t this also changing what is stored to the SD card? Is this correct? Why change storage because of WiFi interference?

I do agree that once words have taken on meaning, whether codified in standards or established through repeated use, it is generally good to respect those.

But I also want to commend Wyze on its openness. A lot of companies would not care to respond or, if they did, would provide cheap marketing copy. Instead, Wyze has provided real, detailed information. This kind of responsiveness and accountability should be recognized and ultimately rewarded in the marketplace. Thanks.

When it comes to codecs, the word codec implies compression. MPEG-2, MPEG-4 Part 10 (H.264), and H.265 are all compression-based codecs. The notion of codec efficiency means that for a given bitrate, one codec has fewer compression artifacts than another. The magic of a codec is a clever collection of rules that describe a frame, as well as rules that describe how the next frame differs from the previous one. The algorithms are amazing, and each successive codec has more of these built in, which is why they continue to get better.

Codec development, storage media development, and display resolution development loosely go hand in hand. That’s why DVD used MPEG-2 at 480p for the then-current display standard, Blu-ray used H.264 at 1080p, and 4K Blu-ray used H.265 at 2160p. Codec efficiency has improved over time. For DVD, Blu-ray, and 4K Blu-ray, the codecs were tuned to the maximum bitrate the storage media could hold, in order to ensure there were no compression artifacts, i.e. visible loss of image quality due to compression by the codec.

The point is that the term ‘compression’ does not mean image quality. Compression is a codec selection decision, and that alone. Wyze cameras all use H.264 compression.

For the various resolutions, and for a given codec, bitrate tuning is the key. This is what determines the degree to which compression artifacts are visible (image quality). Too low a bitrate and the image quality suffers: with H.264 at 150 Kbps, 15 FPS, the image quality (compression artifacts) of 360p vs 1080p is devastatingly different. That’s because there are 9 times more pixels in a 1080p image than in a 360p image, so the bitrate needs to be correspondingly higher for 1080p. You could argue that 1080p at 9x150 = 1,350 Kbps, 15 FPS would be an equivalently tuned bitrate to 360p at 150 Kbps, 15 FPS. Selection of resolution is generally driven by the capabilities of the display. Case in point: YouTube. When viewing full screen, you wouldn’t select 4K if your display is 1080p; there is no point. Similarly, when viewing in a small window, there is no point choosing 1080p on a 1080p display.
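The scaling argument above can be sketched in a few lines: to keep compression artifacts comparable across resolutions, scale the tuned bitrate by the ratio of pixel counts. (The function name and values here are illustrative only.)

```python
# Scale a tuned bitrate by the ratio of pixel counts, so that two
# resolutions are "equivalently tuned" for a given codec.
def equivalent_bitrate_kbps(base_kbps, base_res, target_res):
    """base_res/target_res are (width, height) tuples."""
    base_pixels = base_res[0] * base_res[1]
    target_pixels = target_res[0] * target_res[1]
    return base_kbps * target_pixels / base_pixels

# 360p tuned at 150 Kbps -> equivalently tuned 1080p bitrate
print(equivalent_bitrate_kbps(150, (640, 360), (1920, 1080)))  # 1350.0
```

1080p has 1920x1080 = 2,073,600 pixels vs 640x360 = 230,400 for 360p, a 9x ratio, which is where 9x150 = 1,350 Kbps comes from.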

With IP cameras the bitrate of the stream is relevant for a few things:

- required bandwidth for viewing a live or recorded stream
- required storage for continuous recording to storage media
- compression artifacts (perceived image quality)
- display size when viewed
- connection between devices, as it pertains to maximum supported bandwidth

Note that bitrate and bandwidth use the same unit of measure, bits per second.
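Because bitrate and bandwidth share a unit, the same number drives both streaming bandwidth and recording storage. A rough storage estimate for continuous recording (a sketch of the arithmetic, not any camera’s actual figures):

```python
# Convert a stream bitrate (Mbps) into storage consumed per day of
# continuous recording (GB/day): bits/s -> bytes/s -> bytes/day -> GB.
def storage_gb_per_day(bitrate_mbps):
    bytes_per_second = bitrate_mbps * 1_000_000 / 8
    return bytes_per_second * 86_400 / 1_000_000_000

# e.g. a 0.5 Mbps encoder fills roughly 5.4 GB per day
print(round(storage_gb_per_day(0.5), 1))  # 5.4
```

This is why publishing the encoder bitrate per profile would let users work out days of storage per card size for themselves.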

I did a post that has an analysis of this.

Wyze have ignored industry-standard terminology with their use of SD and HD, and it is this alone that is the basis for so much confusion on this topic. My ask of the Wyze team is that they update their terminology to align with industry standards.

Thanks. So from a user-functionality perspective: having a bad WiFi day and switching my live stream or playback down to 360p also changes what is being recorded to the SD card, until I finish viewing or get a better signal and switch it back to SD or HD?

Or would a “No compression” option still be on the table with the current hardware? I understand that would eat up the storage on the card pretty quickly without compression. Sorry if these are basic questions; I’m just trying to connect the dots of why WiFi interference or a weak cellular signal also forces a change to my stored video quality, and how all that fits in with your #roadmap.

Correct. If the hardware only supports a single stream, then it would not be possible to have separate encoding profiles (not compression, as compression means codec, which is H.264) for livestream vs local storage. No codec (compression) means RAW (true lossless), and 1080p at 15 FPS at 24 bits per pixel requires roughly 746 Mbps (about 8 terabytes of storage per day). This is why we want a good codec, which provides (lossy) compression.
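A quick check of the RAW figure, assuming 24 bits per pixel (8 bits per RGB channel) with no chroma subsampling; the bit depth is my assumption, not a stated camera spec:

```python
# Uncompressed (RAW) video bitrate: pixels x bits-per-pixel x frames/s.
def raw_bitrate_mbps(width, height, fps, bits_per_pixel=24):
    return width * height * bits_per_pixel * fps / 1_000_000

mbps = raw_bitrate_mbps(1920, 1080, 15)          # ~746 Mbps
tb_per_day = mbps * 1_000_000 / 8 * 86_400 / 1e12  # bits -> bytes -> TB/day
print(f"{mbps:.0f} Mbps, {tb_per_day:.1f} TB/day")
```

Against a typical H.264 stream of well under 2 Mbps, that is a compression ratio of several hundred to one, which is the whole point of a lossy codec.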

A weak WiFi signal, busy WiFi, or limited internet bandwidth (ISP link, cellular signal strength, a busy cellular tower, differing cellular data connections: 2G vs 3G vs HSPA+ vs LTE) limits data throughput (measured in Mbps). Because the device seems to only support a single stream (H.264 encoding using one of three profiles, which Wyze call 360p, SD, and HD), the software works to ensure you don’t have laggy live video: it changes the encoding profile to one that consumes less bandwidth.
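The single-stream behaviour described above can be sketched as a simple profile picker. The profile names and bitrate thresholds below are purely illustrative assumptions, not Wyze’s actual firmware values:

```python
# Hypothetical single-stream profile selection: pick the best encoding
# profile that fits the currently measured throughput. Since there is
# only one stream, this choice applies to live view AND local recording.
PROFILES = [  # (name, required Mbps), best first -- illustrative values
    ("HD (1080p high bitrate)", 1.2),
    ("SD (1080p low bitrate)", 0.6),
    ("360p", 0.15),
]

def pick_profile(available_mbps):
    for name, required in PROFILES:
        if available_mbps >= required:
            return name
    return PROFILES[-1][0]  # worst case: fall back to the lowest profile

print(pick_profile(0.8))  # -> SD (1080p low bitrate)
```

This also explains the SD card question above: because the one encoded stream feeds both the live view and local storage, downgrading for bandwidth necessarily downgrades what gets recorded.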

For the overarching topic, I’m not involved enough in the tech to speak to this one so I’ll see if I can get people from the dev side in here. Thanks for bringing it up! But please keep this conversation kind. It’s an important one and we would rather not have to close it or take other actions that would limit the discussion.

Nice cam. I have the Pan. Seems the video isn’t very high quality. Is that a hardware or a software issue? I have plenty of bandwidth, both on my phone and my home net. I watch plenty of YouTube; can we get better bitrates and FPS?