Sunday, August 30, 2009

MOUNTAIN VIEW, Calif. — (BUSINESS WIRE) — August 27, 2009 — Synfora, Inc., the premier provider of algorithmic synthesis tools for designers of large, complex integrated circuits and systems, today announced that it has purchased Esterel Studio™, a tool suite developed by Esterel EDA Technologies. Esterel Studio is based on the Esterel synchronous programming language, which three of the top 10 semiconductor companies use to design control logic and bus systems in system-on-chip (SoC) designs.

“Esterel Studio is complementary to the PICO algorithmic synthesis platform and was already part of an integrated flow used by several of our customers,” said Synfora CTO Vinod Kathail. “This step is a part of our long-term vision of providing integrated solutions for application accelerators and more control-oriented IP.”

Esterel Studio is primarily used to design control-intensive silicon intellectual property (IP) blocks and complex reactive systems such as control circuits, embedded systems, human-machine interfaces and communication protocols. Companies such as STMicroelectronics, Texas Instruments, NXP and Intel have used the Esterel programming language for more than 50 production designs.

Esterel Studio supports a complete flow from design to verification, including textual or graphical design of large state machines with arbitrary embedded data paths, animated simulation and debugging. Esterel Studio can generate either HDL (Verilog, VHDL) code or C/SystemC models from the same source code, which ensures that the models used in virtual platforms for software validation agree with the final hardware design. Esterel Studio also supports formal verification of the produced results, a critical capability for complex control-oriented designs. In conjunction with the PICO platform, this will provide Synfora customers with an integrated design environment for developing both control-intensive and algorithmic-intensive blocks.

Impulse CoDeveloper™ includes the Impulse C™ software-to-hardware compiler, interactive parallel optimizer, and Platform Support Packages supporting a wide range of FPGA-based systems. Impulse tools are compatible with all popular FPGA platforms.

Impulse CoDeveloper™ is a hardware/software design tool that allows the C language to be used to develop applications for mixed hardware/software platforms including Xilinx MicroBlaze-based Virtex-II or Spartan FPGAs. The Impulse C libraries and compiler tools support multiple programming models, including streams-based programming, allowing a software programmer to make use of available FPGA resources for hardware coprocessing without the need to write low-level hardware descriptions. The compiler tools included with CoDeveloper provide the necessary C to RTL compilation path, as well as providing automated generation of software/hardware interfaces that are specifically optimized for the MicroBlaze processor platforms. This capability makes it possible for an application developer to create a complete hardware/software application with no need to write VHDL or Verilog code. Instead, the CoDeveloper tools create the necessary low-level hardware and software descriptions (in the form of HDL outputs and automatically-generated software libraries) which can then be imported directly into the Xilinx tools for hardware synthesis and implementation. CoDeveloper HDL outputs are also fully compatible with third-party synthesis tools including those available from Xilinx, Synplicity and Mentor.

Impulse C to RTL Flow

Write or import ANSI C designs using Impulse MicroBlaze libraries.

Compile and debug within CodeWarrior™, GCC™, Visual Studio™, or other development environments.

Use CoDeveloper and Impulse C to improve parallelism and identify performance bottlenecks.
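The streams-based programming model underlying this flow can be sketched in ordinary software terms: independent processes connected by blocking streams. The sketch below is only a Python analogy (the names `producer`, `filter_process` and `run_pipeline` are invented here; real Impulse C processes are written in C, use APIs such as `co_stream_read`/`co_stream_write`, and are compiled to hardware):

```python
from queue import Queue
from threading import Thread

def producer(out_stream, samples):
    """Feed samples into a stream, ending with a None marker."""
    for s in samples:
        out_stream.put(s)          # analogue of a co_stream_write()
    out_stream.put(None)           # end-of-stream marker

def filter_process(in_stream, out_stream):
    """A trivial 'hardware' process: scale each sample by 2."""
    while True:
        s = in_stream.get()        # analogue of a co_stream_read()
        if s is None:
            out_stream.put(None)
            break
        out_stream.put(s * 2)

def run_pipeline(samples):
    """Wire producer -> filter as concurrent processes and collect output."""
    a, b = Queue(), Queue()
    Thread(target=producer, args=(a, samples)).start()
    Thread(target=filter_process, args=(a, b)).start()
    results = []
    while (s := b.get()) is not None:
        results.append(s)
    return results

print(run_pipeline([1, 2, 3]))     # [2, 4, 6]
```

Because each process only touches its streams, the stages run concurrently; it is this isolation that lets a compiler map each process to independent hardware.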

Saturday, August 29, 2009

Design teams are under growing pressure to create faster, cheaper, better products. Finding the most efficient implementation of complex algorithms in silicon is critical to success, but can be time-consuming and expensive. Algorithmic synthesis allows design teams to work at higher levels of abstraction, decreasing the time, cost and risk of designing a complex FPGA.

Only the application engine defines and differentiates the FPGA’s functionality. An application engine is an efficient implementation of the algorithm that meets the PPA (Power, Performance, Area) targets. It also provides standard interfaces (streaming data, local memory and bus interface) to ensure easy integration into the FPGA. The application engine is critical for differentiating the end product. It can change rapidly, and the bulk of engineering effort is spent on its implementation. Algorithmic synthesis makes the design process dramatically quicker at a lower cost by creating hardware application engines that “drop” into the rest of the FPGA.

An FPGA comprises application engines (video CODEC, wireless MODEM), embedded processor, connectivity and control IP and memory. It is the application engines that define and differentiate the end product.

About the PICO platform

The PICO platform comprises tools and IP, built around an advanced compiler technology based on over a decade of research at HP Labs. The compiler finds and exploits parallelism at multiple levels and then creates efficient hardware to meet the chosen performance target.

PICO Extreme FPGA brings all the benefits of PICO Express FPGA but also introduces TCABs, a major technological innovation. TCABs enable a recursive system composition (building blocks within building blocks) that allows more intuitive coding, better results and faster runtimes. PICO Extreme algorithmic synthesis enables the implementation of dramatically larger and more complex sub-systems.

Monday, August 24, 2009

Something I end up explaining relatively often has to do with all the various ways you can stream video encapsulated in the Real-time Transport Protocol, or RTP, and still claim to be standards compliant.

Some background: RTP is used primarily to stream either H.264 or MPEG-4 video. RTP is a system protocol that provides mechanisms to synchronize the presentation of different streams – for instance, audio and video. As such, it performs some of the same functions as an MPEG-2 transport or program stream.

RTP – which you can read about in great detail in RFC 3550 – is codec-agnostic. This means it is possible to carry a large number of codec types inside RTP; for each codec, the IETF defines an RTP profile that specifies the codec-specific details of mapping data from that codec into RTP packets. Profiles are defined for H.264, MPEG-4 video and audio, and many more. Even VC-1 – the “standardized” form of Windows Media Video – has an RTP profile.
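The codec-agnostic part of an RTP packet is its fixed 12-byte header, defined in RFC 3550. Here is a minimal Python sketch of parsing it (illustrative only; it ignores CSRC entries, header extensions and padding):

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550, section 5.1)."""
    if len(packet) < 12:
        raise ValueError("truncated RTP packet")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # always 2 in practice
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,   # e.g. a dynamic PT (96+) for H.264
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

# Example packet: version 2, marker set, payload type 96, sequence 1000,
# timestamp 90000, an arbitrary SSRC, and no payload.
pkt = bytes([0x80, 0xE0, 0x03, 0xE8]) + struct.pack("!II", 90000, 0x12345678)
```

Everything codec-specific – how slices, frames or audio samples map onto the payload that follows this header – is what the per-codec RTP profile documents spell out.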

In my opinion, the standards are a mess in this area. It should be possible to meet all the various requirements on streaming video with one or at most two different methods for streaming. But ultimately standards bodies are committees: Each person puts in a pretty color, and the result comes out grey.

In fact, the original standards situation around MPEG-4 video was so confused that a group of large companies formed the Internet Streaming Media Alliance, or ISMA. ISMA’s role is basically to wade into all the different options presented in the standards and create a meta-standard – currently ISMA 2.0 – that ties a number of other standards documents together and tells you how to build a working system that will interoperate with other systems.

In any event, there are a number of predominant ways to send MPEG-4 or H.264 video using RTP, all of which follow some relevant standards. If you’re writing a decoder, you’ll normally need to address all of them, so here’s a quick overview.

Multicast delivery: RTP over UDP

In an environment where there is one source of a video stream and many viewers, ideally each frame of video and audio would only transit the network once. This is how multicast delivery works. In a multicast network, each viewer must retrieve an SDP file through some unspecified mechanism, which in practice is usually HTTP. Once retrieved, the SDP file gives enough information for the viewer to find the multicast streams on the network and begin playback.

In the Multicast delivery scenario, each individual stream is sent on a pair of different UDP ports – one for data and the second for the related Real Time Control Protocol or RTCP. That means for a video program consisting of a video stream and two audio streams, you’ll actually see packets being delivered to six UDP ports:

Video data delivered over RTP

The related RTCP port for the video stream

Primary audio data delivered over RTP

The related RTCP port for the primary audio stream

Secondary audio data delivered over RTP

The related RTCP port for the secondary audio stream

Timestamps in the RTP headers can be used to synchronize the presentation of the various streams.

As a side note, RTCP is almost vestigial for most applications. It’s specified in RFC 3550 along with RTP. If you’re implementing a decoder you’ll need to listen on the RTCP ports, but you can almost ignore any data sent to you. The exceptions are the sender report, which you’ll need in order to match up the timestamps between the streams, and the BYE, which some sources will send as they tear down a stream.
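To see why the sender report matters: it pairs a wall-clock (NTP) time with the RTP timestamp in effect at that same instant, giving each stream an anchor from which any RTP timestamp can be converted to wall-clock time. A sketch of that conversion in Python (the function name and the numbers are invented for illustration; real code must also handle 32-bit timestamp wraparound):

```python
def rtp_to_wallclock(rtp_ts, sr_wallclock, sr_rtp_ts, clock_rate):
    """Convert an RTP timestamp to wall-clock seconds, using the
    (wall-clock, RTP timestamp) anchor from an RTCP sender report."""
    return sr_wallclock + (rtp_ts - sr_rtp_ts) / clock_rate

# Video RTP clocks typically run at 90 kHz. If a sender report anchored
# RTP timestamp 90000 at wall-clock t = 100.0 s, then RTP timestamp
# 180000 corresponds to exactly one second later.
t = rtp_to_wallclock(180000, 100.0, 90000, 90000)   # 101.0
```

Doing this independently for the video stream and each audio stream puts all of them on a common wall-clock timeline, which is what lets a player present them in sync.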

Multicast video delivery works best for live content. Because each viewer is viewing the same stream, it’s not possible for individual viewers to be able to pause, seek, rewind or fast-forward the stream.

Unicast delivery: RTP over UDP

It’s also possible to send unicast video over UDP, with one copy of the video transiting the network for each client. Unicast delivery can be used for both live and stored content. In the stored content case, additional control commands can be used to pause, seek, and enter fast forward and rewind modes.

Normally in this case, the player first establishes a control connection to a server using the Real Time Streaming Protocol, or RTSP. In theory RTSP can be used over either UDP or TCP, but in practice it is almost always used over TCP.

The player is normally started with an rtsp:// URL, and this causes it to connect over TCP to the RTSP server. After some back-and-forth between the player and the RTSP server, during which the server sends the client an SDP file describing the stream, the server begins sending video to the client over UDP. As with the multicast delivery case, a pair of UDP ports is used for each of the elementary streams.

For seekable streams, once the video is playing, the player has additional control using RTSP: It can cause playback to pause, or seek to a different position, or enter fast forward or rewind mode.

RTSP Interleaved mode: RTP and RTSP over TCP

I’m not a fan of streaming video over TCP. In the event a packet is lost in the network, it’s usually worse to wait for a retransmission (which is what happens with TCP’s guaranteed delivery) than it is just to allow the resulting video glitch to pass through to the user (which is what happens with UDP).

However, there are a handful of different networking configurations that would block UDP video; in particular, firewalls historically have interacted badly with the two modes of UDP delivery summarized above.

So the RTSP RFC, in section 10.12, briefly outlines a mode of interleaving the RTP and RTCP packets onto the existing TCP connection being used for RTSP. Each RTP and RTCP packet is given a four-byte prefix and dropped onto the TCP stream. The result is that the player connects to the RTSP server, and all communication flows over a single TCP connection between the two.
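A hedged sketch of what de-framing that interleaved stream looks like (the function name is invented; error handling is minimal). Per RFC 2326 section 10.12, the four-byte prefix is a '$' byte, a one-byte channel identifier, and a two-byte big-endian length:

```python
import struct

def read_interleaved_frames(buf: bytes):
    """Yield (channel, packet) tuples from RTSP interleaved data."""
    offset = 0
    while offset + 4 <= len(buf):
        magic, channel, length = struct.unpack_from("!BBH", buf, offset)
        if magic != 0x24:          # 0x24 is ASCII '$'
            raise ValueError("lost framing (expected '$')")
        packet = buf[offset + 4 : offset + 4 + length]
        if len(packet) < length:
            break                  # partial frame; wait for more data
        yield channel, packet
        offset += 4 + length

# Two frames: a 3-byte packet on channel 0, a 2-byte packet on channel 1.
frames = list(read_interleaved_frames(
    b"\x24\x00\x00\x03abc\x24\x01\x00\x02xy"))
```

By convention the channel numbers are assigned during RTSP SETUP, typically pairing an RTP channel with its RTCP channel the same way the UDP modes pair ports.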

HTTP Tunneled mode: RTP and RTSP over HTTP over TCP

You would think RTSP Interleaved mode, being designed to transmit video across firewalls, would be the end of the story, but it turns out that many firewalls aren’t configured to allow connections to TCP port 554, the well-known port for an RTSP server.

So Apple invented a method of mapping the entire RTSP Interleaved communication on top of HTTP, meaning the video ultimately flows across TCP port 80. To my knowledge, this HTTP Tunneled mode is not standardized in any official RFC, but it’s so widely implemented that it has become a de-facto standard.

Which will be the best?

A lot of customers, and also manufacturers, ask me: “In your expertise, which brand is the best?”

The first time I was asked this question, I thought, “Cool… I never looked at it this way.”

In my opinion there is no best brand or, let’s say, no winner. As discussed in my other article, we need to deliver solutions that fit the market’s (customers’) needs. Of course, a specific brand could fit these needs better than another brand, but that still doesn’t mean it is the best brand.

Video Content Analysis based systems are mostly developed for a specific market segment. Face recognition, traffic control or, for example, smoke detection are very different types of situations, and because of that, these systems need to be filled out with solutions developed specially for those situations.

You can imagine that it is not possible for me to answer the question. It is a pity to see that most people are disappointed by that. In their opinion, I, as an expert, have to answer it. If I can’t answer it… what does that mean?

The fact that answering this question isn’t possible at this moment just shows us how broadly the terms VCA and IVA are used. Splitting the market into various segments would make the question simpler to answer, but even then I am convinced that there are no really bad brands; it just depends on the combination of the right brand used in the right situation.

As I mentioned before, we have to take care to manage the customer’s level of expectations.

Norbert - I agree with your summary on the brand winner. Just as there are many computer software winners, each for its particular application, so it is with video analytics. People understand they cannot buy just any camera – each camera is good for a particular purpose – and so it is with video analytics packages. Also, video analytics is part of a system, so its capabilities are in tandem with the system. I do not think anyone can say with certainty which brand is best, even more so with analytics.

Dick Salzman, CPP
Keeneo
dick.salzman@keeneo.com

Norbert, I agree with your last two comments about customer expectations and using the right software in the right situation. VCA adds extra value to camera systems and can let operators work more effectively, but it still has some limitations. Most of the well-known VCA companies work at quite the same level, so it’s hard to say which is the best company.

Regards,
Matthijs Vrisekoop
vrisekoop@hillson.nl

Further to Dick’s comment that “each camera is good for a specific purpose,” we need to understand that any analytics software can only be as good as the raw data provided (in this case, video). Often the analytics is added as an “afterthought” to an existing system, and even when this is not the case it is uncommon to find detailed specifications as to what is expected from each camera in a system. A detailed specification and survey must be provided before any analytics system is designed. Camera type, location, lighting (and other environmental considerations) must be considered in the basic design. If we do not clearly define what we expect from each camera (or group of cameras), then the chances of meeting expectations are slim. Systems often fail because these issues are not addressed in the basic design. Often, the “best system” will be the system installed by the best integrator, who understands the system requirements and plans accordingly.

Sunday, August 9, 2009

There are three leading mixed-language simulators: VCS (Verilog Compiler Simulator), formerly from Chronologic but now owned by Synopsys; IUS (Cadence Incisive Unified Simulator) from Cadence; and ModelSim from Mentor Graphics. The answer to which is best varies by project. Usually VCS and IUS are faster than ModelSim, and ModelSim uses more memory.

Saturday, August 8, 2009

"Formality is an application that uses formal techniques to prove or disprove the functional equivalence of two designs or two technology libraries. For example, you can use Formality to compare a gate-level netlist to its register transfer level (RTL) source or to a modified version of that gate-level netlist. After the comparison, Formality reports whether the two designs or technology libraries are functionally equivalent. The Formality tool can significantly reduce your design cycle by providing an alternative to simulation for regression testing.

The techniques Formality uses are static and do not require simulation vectors. Consequently, for design verification you only need to provide a functionally correct, or “golden,” design (called the reference design) and a modified version of the design (called the implementation design). By comparing the implementation design against the reference design, you can determine whether they are functionally equivalent to each other. Technology library verification is similar except that each cell in the implementation library is compared against each cell in the reference library one cell at a time."

How to run the first tutorial example?

The following steps use the command line; however, fm_shell -gui may be better for your experiment. Assume you have installed Formality (fm) in /tools/synopsys/fm/C-200906_sp1 and Design Compiler (DC or syn) in /tools/synopsys/syn/C-200906-SP1, then

Thursday, August 6, 2009

I just read this report of a new IP security vulnerability being demonstrated today at the DefCon hackers’ conference in Las Vegas. The new hack has two components:

The attackers are able to view video being streamed across a network, and

The attackers are able to use a man-in-the-middle attack to insert video controlled by the attacker into a video decoder somewhere on the network.

The linked video shows viscerally how an attacker could foil a security/surveillance video system – a modern-day Thomas Crown Affair. But the underlying problem goes beyond the surveillance market and could conceivably affect a wide range of industries using video over IP. This is a big deal, and vendors of any form of network-connected IP video device – whether a camera, encoder, or decoder – should take note.

In fact, the security researchers who are demonstrating the hack are also helpfully releasing open source software to exploit the vulnerability. So what started out as a vulnerability that was only open to bad guys with a reasonably deep technical understanding has just become widely accessible. Thanks, guys.

At Cardinal Peak, we’ve built a large number of these systems, so I feel like I have a relatively good understanding of why vendors of IP video solutions are doing what they are. It’s all about cost: Today most IP video is not encrypted when it is transmitted across the network. That’s bad. (What’s even worse, many products’ user interfaces offer faux security options, like bogus “password-protection,” that might lead enterprise customers to think they’ve got more security than they do.)

The reason that video is sent unencrypted is a corollary to the First Law of Video:

Video – even video compressed using state-of-the-art codecs like H.264 – is BIG.

It takes a lot of bits to send motion imagery across a network. If you want to encrypt that video, you’ll have to encrypt those bits. Encrypting a lot of bits consumes nontrivial computing power – which means you either need a beefier CPU in your embedded video encoder device, or you need dedicated hardware like an FPGA. Either way, adding encryption to your product is going to add to your cost of goods sold.
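Some rough arithmetic makes the point concrete. The numbers below are illustrative assumptions (8-bit 4:2:0 sampling, a plausible H.264 rate for 1080p30), not measurements:

```python
def raw_bitrate_mbps(width, height, fps, bits_per_pixel=12):
    """Uncompressed video bitrate in Mbit/s.
    12 bits/pixel assumes 8-bit 4:2:0 chroma subsampling."""
    return width * height * fps * bits_per_pixel / 1e6

raw = raw_bitrate_mbps(1920, 1080, 30)   # ~746 Mbit/s uncompressed
h264 = 8.0                               # an assumed H.264 rate for 1080p30
ratio = raw / h264                       # compression of roughly 90:1
```

Even after H.264 squeezes the stream by two orders of magnitude, 8 Mbit/s is still millions of bytes per second that an encryption engine would have to process continuously, on every channel.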

But wait, it’s worse. Adding more processing power to an embedded device means more power to dissipate, which increases the need for moving parts like fans, which in turn lower reliability. So in addition to cost, there is complexity, reliability, and power dissipation to worry about.

Even if you somehow get around that, there are more problems. To display the video, you still need to decrypt it, which means you’re going to consume CPU power on the decode side, as well. On modern computers, that probably isn’t a huge problem if all you want to do is display video from a single camera. On the other hand, if you’re trying to display a 16-up display of live video from 16 cameras – well, time to buy some more Intel stock.

And finally: Adding security features to a system is always at cross-purposes with making that system easy to use. So solving this problem places a burden on every system integrator and IT administrator.

What a pain!

For standards-based MPEG-4 or H.264 systems, there is a standard called Secure RTP (with the associated SRTP RFC if you’re looking for some light reading) that, if implemented widely, would basically prevent the hack. Unfortunately, as far as I’m aware, very few encoders, decoders, or network recorders implement SRTP. That may be about to change, assuming news of the hack causes customers to complain to their vendors.

I’m not aware of a standards-based way to encrypt MPEG-2 video over IP, although at first blush you wouldn’t think it would be too hard to come up with one. But crypto in general seems difficult to get right – witness the difficulties that they’ve had with ssh, which has been designed to be secure from the ground up.

Google (GOOG) and On2 Technologies (ONT) jointly announced Wednesday that they have entered into a definitive agreement under which Google will acquire On2, a developer of video compression technology. The acquisition is expected to close later this year. On2 markets video compression technologies that power high-quality video in both desktop and mobile applications and devices and also holds a number of interesting patents.

Some of its codec designs are known as VP3, VP4, VP5, TrueMotion VP6, TrueMotion VP7 and VP8. Its customers include Adobe, Skype, Nokia, Infineon, Sun Microsystems, Mediatek, Sony, Brightcove, and Move Networks. On2, formerly known as The Duck Corporation, is headquartered in Clifton Park, NY.

Under the terms of the agreement, each outstanding share of On2 common stock will be converted into $0.60 worth of Google class A common stock in a stock-for-stock transaction. The transaction is valued at approximately $106.5 million.

According to the release, $0.60 per share represents a premium of approximately 57% over the closing price of On2’s common stock on the last trading day immediately prior to the announcement of the transaction, and a premium of approximately 62% over the average closing price of On2’s common stock for the six month period immediately prior to the announcement of the transaction.

Important to note is that On2 once had a market cap in excess of $1 billion at its peak, after going public on the American Stock Exchange in 1999 following a merger with Applied Capital Funding (as was noted at the time). Before its entry on the public market, The Duck Corporation had raised $6.5M in venture capital funding from Edelson Technology Partners and Citigroup Ventures.

Back in 2001, On2 made waves by releasing their VP3 compression technology to the open-source community, including their patents on the technology. The technology lives on in the form of (Ogg) Theora. You can find more information about this here.

The agreement is subject to On2 stockholder approval, regulatory clearances and other closing conditions.

Google is reluctant to dive into specifics regarding its product plans until after the deal closes, although the acquisition is conceivably related to its immensely popular video service YouTube.

Although we’re not in a position to discuss specific product plans until after the deal closes, we are committed to innovation in video quality on the web, and we believe that On2 Technologies’ team and technology will help us further that goal.

We’ll update everybody when we’re able to share more information. In the meantime, nothing will change for On2 Technologies’ current and prospective customers.

It would be great if Google decided to open-source On2’s VP7 and VP8 video codecs and free them up as worldwide video codec standards, making them alternatives to the proprietary and licensed H.264 codec. On2 has always claimed VP7 offers better quality than H.264 at the same bitrate.

Also noteworthy: Google could use the VP8 codec for YouTube in HTML5 mode, basically forcing its many users to upgrade to HTML5-compliant browsers instead of using Flash formats.

Smart move by Google, and possibly great news for innovation in web-based video viewing.

Monday, August 3, 2009

On Monday, July 27, 2009, Xilinx announced that its newest-generation Virtex®-6 FPGA family is compliant with the PCI Express® 2.0 specification, delivering up to 50 percent lower power than previous generations and 15 percent higher performance than competitive offerings. The second-generation PCIe® block integrated in Xilinx® Virtex-6 FPGAs has passed PCI-SIG PCI Express version 2.0 compliance and interoperability testing for 1- to 8-lane configurations, adding to the broad range of design resources from Xilinx and its alliance members that support the widely adopted serial interconnect standard. This significant industry milestone is expected to accelerate mainstream development of high-bandwidth PCIe 2.0 systems for communications, multimedia, server and mobile platforms, enabling applications such as high-definition video, high-end medical imaging, and industrial instrumentation, among others.

The Virtex®-6 FPGA family was first announced on February 2, 2009. Implemented on a 40-nm process technology, Virtex-6 delivers up to 50 percent lower power and 20 percent lower cost than previous generations of FPGAs and offers up to 760K logic cells.

With a 1.0-V core voltage and a 0.9-V low-power option, Xilinx claimed that the Virtex-6 offers 15 percent higher performance and 15 percent lower power consumption compared to competing 40-nm FPGA offerings. These advances enable system architects to integrate Virtex-6 FPGAs into designs for green central offices and data centers, the company said. Virtex-6 comes in LXT, HXT, and SXT versions with different combinations of storage, DSP and serial communications performance.

The Virtex-6 devices offer up to 2.5 times the logic capacity of the Virtex-5 generation and twice the DSP resources. A variety of serial communications transceivers are included on different FPGAs that can operate at 3.2-, 6.5- and 11.2-Gbits per second.

While the Spartan-6 family, also announced on February 2, 2009, is priced between about $3 and $54 in high volumes of 10,000 units, the Virtex-6 range is priced between $50 and $2,000, depending on FPGA logic capacity and volume.

Formality is a tool from Synopsys used for formal verification. Formal verification is a method to verify, without running simulations, that two designs are functionally equivalent. One design is the 'reference' design, which is assumed to be a 'good' design, and the second design, called the implementation design, is the one checked against the 'reference' design.
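As a toy illustration of the idea – proving equivalence over all inputs rather than simulating user-chosen vectors – here is a Python sketch comparing two small combinational functions exhaustively. (The example functions are invented; real tools like Formality scale to full netlists by using techniques such as BDDs and SAT solving rather than enumeration.)

```python
from itertools import product

def reference(a, b, c):
    """'Golden' RTL behavior: (a AND b) OR (a AND c)."""
    return (a and b) or (a and c)

def implementation(a, b, c):
    """Optimized netlist behavior: a AND (b OR c)."""
    return a and (b or c)

def equivalent(f, g, n_inputs):
    """Exhaustively check f == g over all 2**n_inputs input patterns."""
    return all(f(*bits) == g(*bits)
               for bits in product([False, True], repeat=n_inputs))

print(equivalent(reference, implementation, 3))   # True: the designs match
```

Because the check covers every input pattern, a True result is a proof of equivalence for these functions, not merely a passed regression suite – which is exactly the guarantee an equivalence checker offers over simulation.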