DVCon Europe: European Practicality With International Relevance
(Posted October 30, 2015)

On November 11th and 12th, DVCon Europe will once again take place in the lovely city of Munich. The inaugural event last year demonstrated a clear need for this conference in Europe, with its focus on practical information that let attendees get a rapid, all-encompassing update on a broad range of design and verification techniques. It also showed the international audience the areas where Europe leads, influencing EDA development and thinking on a global basis.

This year’s show promises an even bigger and better program. It is expected to grow significantly, and indeed, early registrations, the size of the exhibit, and the number of papers and tutorials all bear this out. The theme of the conference focuses on the predominantly European automotive semiconductor segment, which acts as a driver for next-generation design and verification across the entire industry, given the absolute reliability requirements of these devices. Subject areas including system-level abstraction, analog/mixed-signal devices, UVM and other advanced verification techniques will all be covered, alongside a number of networking opportunities including a gala dinner that is included as part of the registration.

On the tutorial-focused first day, attendees can choose from fifteen 90-minute lectures and a panel, with time to attend four informative sessions back to back. Delivered by world-class speakers, the tutorials cover such topics as:

Basic UVM, advanced UVM, UVM reuse, all things UVM

Safety-critical and security design & verification

Communication protocol development

Analog/mixed-signal systems

The latest assertion-based verification techniques

Advanced debug

FPGA debug and rapid prototyping

System-level design techniques, including SystemC

The latest work on standards from Accellera

A panel of experts in different verification areas discussing the next advances

Whether you are trying to advance your UVM or formal verification environments, learn the latest design techniques for high-reliability devices, leverage various system abstractions, or simply improve overall skills and understanding, this is the practical show for you. A note to managers: in one day your engineers can get an update on the latest techniques that would be hard to cover even in multiple training courses.

Building on the success of last year’s inaugural conference, the sessions this year are impressive. Twenty-six papers and ten posters on the second day cover a full range of subjects, hitting on the major challenges facing many development teams. As the General Chair for the conference, Martin Barnasconi of NXP noted, “The reason we hold DVCon in Europe is because of the differences in the local community to other regions. We have our own competence areas and application domains. We are quite strong in the system-level space and in mixed-signal. And we are reflecting that back on issues in verification.”

One addition to the program this year is a gala dinner on Wednesday evening. Included as part of the standard registration, this event will provide an international networking opportunity over a delightful European menu. Meet your peers in companies across Europe and enjoy a relaxed evening of good food and fine wine.

The technical sessions at DVCon Europe will present dozens of new ideas, applications and methodologies to help design and verification engineers do their jobs better and faster. In many cases, commercial solutions can help accelerate the adoption of these new approaches. DVCon Europe 2015 features a 25-company exhibition with knowledgeable staff from EDA companies, consulting and training partners, and IP vendors large and small.

DVCon US, India and Europe are known for the valuable and meaningful insights they provide to real engineers working on real projects. This conference has excelled in providing the right information in a practical manner, while still retaining that European flavor and flair. Stay up to date with the latest techniques, tools, and advances in modern EDA. Registration options are flexible, so you can take a quick peek on one day or take in the whole conference and dinner; it's up to you. In the words of the conference chair, "if you are keen on networking, learning and sharing design and verification practices, then DVCon Europe is the place to be." Space is filling up, so please register now.

Experience the DVContinuum
(Posted May 15, 2015)

For more than 25 years, DVCon has been the premier conference for discussing the challenges and achievements of functional design and verification of electronic systems and integrated circuits. The DVContinuum includes the well-established DVCon United States in March, augmented by DVCon India in September and DVCon Europe in November (Munich, November 11-12, 2015).

For each region, DVCon provides a well-chosen mixture of technical paper sessions, tutorials, keynotes, posters and exhibits. Sponsored by Accellera Systems Initiative, DVCon gives attendees access to the latest information on the various Accellera standards and their application to system-level design, modelling and verification (including UVM, SystemC, SystemVerilog, IP-XACT and many more). Topics include system-level virtual prototyping, IP reuse, design automation, mixed-signal design, and low-power design and verification. Holding DVCon not only in the US but also in Asia and Europe allows networking and discussion among a much broader audience, and extends DVCon's value to a wider community than just those who have the opportunity to travel to the US.

As a DVCon attendee, you will hear a lot about "shift left" and the early verification of complex systems. This is not a new concept at all, even if it may look like one today. A historic example of a shift left was called out by John F. Kennedy in May 1961: "I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the earth." At that time, the required technologies and procedures for a moon landing did not even exist.

For a successful moon mission, not only the actual rocket-science engineering but also thorough and early verification, continuous learning by the teams, and stress testing using system simulation were key factors. Looking deeper into the story of the successful Apollo 11 landing on July 20, 1969, we get very interesting insights into the importance of the right verification. As the lunar module Eagle made its landing approach, a short distance above the surface, a computer alarm, code 1202, was raised (among other related alarms). Steve Bales, the computer expert on Gene Kranz's Mission Control team, was able to quickly identify alarm 1202 as an "executive overflow", which simply meant that the computer was having trouble completing its work in the available cycle time. So the right GO-for-landing decision was made, rather than an ABORT with potentially fatal consequences. Bales was able to analyze the alarm so quickly because exactly this test case had been simulated before the Apollo 11 mission. Just two weeks prior to the launch, simulation supervisor Dick Koos had thrown a series of program alarms (including the 1202) into the integrated simulations to stress-test the flight controllers and the Apollo 11 crew's reactions. During that testing the team had failed, making the wrong ABORT decision; two weeks later the simulated scenario became real, and the experience helped Mission Control make the right call and supported a successful moon landing.

As this historical example shows, the DVContinuum addresses ever-increasing complexity, complexity of a kind that was well mastered 50 years ago, and it exemplifies the importance of verification for our IC industry. Understanding this DVContinuum is vital to meeting the requirements and addressing the complexity of "systems of systems" verification. Smarter abstraction techniques, automation, stimulus techniques and, above all, the creativity of verification engineers in building the appropriate simulation models efficiently will help to continuously push out the limits of verification.

If you are interested in experiencing the DVContinuum yourself, join us at DVCon Europe and see great examples of yesterday's, today's and tomorrow's systems. We look forward to meeting you in Munich!

A young entrepreneur's take on EDA
(Posted February 20, 2014)

You've all read the opinions of EDA veterans concerning this industry many times, so for a change I'll humor you with my opinion as a young entrepreneur and EDA company founder. This post originally appeared on my company's official blog. What prompted me to write this piece is Gabe's article on Starting A New EDA Company.

In his post, Gabe is hoping for "disruption" and "a new business model", yet he notes the "total lack of new ideas from younger people". Hmm. Well, I'm young, and I certainly do think that younger people have lots of ideas, and that they're actually having huge success; it's just that it's happening in other industries, like, for instance, the software industry. I'm talking about Facebook, Twitter, Instagram, Snapchat, and the like. Now why is that? Why are young people having success in software but not so much in semiconductors? Could it be that there is something specific about the semiconductor industry? After all, until its recent acquisition by Cadence, Forte was still called a start-up, after 16 years (it was founded in 1998). And VCs seem to agree that the money is elsewhere. So what is it?

It turns out that to create a “new successful EDA company”, you should “understand thoroughly the application industry [your company] serves”. Ok, but how is it possible for young people to do that exactly? The semiconductor industry today is mainly about designing SoCs, and that requires many different skills and companies and people working together. It takes years to become proficient in designing quality hardware with RTL, and this is only the first step to making a chip! Then you need to learn about verification (and SystemC, and SystemVerilog, and UVM, and equivalence checking, and I don’t know what else, after all I’m not a verification engineer!), and back-end, and DFT, etc. How are you supposed to thoroughly understand all this without 10 or 15 years of experience?

Maybe this explains it. I have the impression that the semiconductor industry has kind of an "old" feeling attached to it, as if the hardware industry were lagging behind the software industry. Remember when all you had was proprietary software products that were incompatible with each other? This is EDA today. Where else do you need an NDA to read a datasheet or a tool's documentation? Where else are you forbidden to compare competing products? Sad but true: for example, if you read Xilinx's license, you will see in Section 4 (Restrictions), sub-section (b) (General Restrictions), the following: "Licensee is not licensed to, and agrees not to: (iii) publish or disclose the results of any benchmarking of the Software". Cadence goes further in its website's terms of use: "Any postings to chat rooms, forums, message boards, contests, or similar information you submit, as well as any computer code, bug fix, or other creative idea you provide, shall be deemed, and shall remain, the property of Cadence".

This kind of attitude is part of the problem. Let's take another example. Most EDA software uses the same licensing program, the well-known FlexLM. That stuff is 26 years old. Surely by now you'd imagine we would have a better solution? Well, there are alternatives. So why does EDA keep using this one? Is it because this industry is a conservative triumvirate? Is it because these three are just too big? But being a behemoth has never prevented innovation! Agreed, it does make it more difficult, because of the innovator's dilemma, but many bigger companies still manage to innovate a lot. Google's revenue for 2013 was about $60 billion; that's roughly 30, 40, and 60 times the revenue of Synopsys, Cadence, and Mentor Graphics respectively. If being big does not prevent innovation, what else could?

I think this is a cultural problem. We have a kind of chicken-and-egg problem, with users who have become afraid of change (including new EDA software) because change has all too often caused problems, and companies that do not change things because they fear change is going to cause problems or make users angry. And in the end, users are the ones who give you money, so you try to listen to them. That's actually fine, as long as you keep in mind that only a small percentage of users are actually innovators and early adopters, and these are the ones willing to change first; if you convince them, you have a much better chance of convincing the others (more or less easily; see Crossing the Chasm, and the post I wrote about this, Are you pre-chasm?). This is a distinctive trait of the semiconductor industry in my opinion: we seem to hear the late majority (to quote the original research, "older and fairly conservative") voice its opinion much more than one would otherwise expect.

Despite all that, though, I love writing EDA software for all the hardware designers who are open to the possibility of improving their design flow. It makes me pretty happy when I meet or talk with them. And of course I love designing hardware with the Cx language that we created!

The Concurrent Design-Flow Experiment
(Posted August 8, 2012)

At DAC this year I had a lot of fun running a live experiment to demonstrate some of the benefits and issues of concurrent design flows. I was at the Cadence Theatre giving a presentation called 'Controlling the costs of SoC integration', and I decided to make the presentation more interactive by creating a design team and seeing some of the effects of getting this team to work concurrently. We demonstrated how a little 'twist' caused a big upset for team deliveries!

Concurrency

The topic I introduced first was how system design flows are now highly concurrent. To produce a system within a very tight timescale, it is normal to have architecture definition, software development, virtual prototype development, RTL design and verification all happening at the same time, whether at the IP, sub-system or SoC level. I represented this as a set of rotating, interacting cogs.

Having the teams work concurrently means that the product can be delivered in a compressed timescale. However, there are downsides to this process. If any of the cogs lock, the whole process is disrupted, which implies high levels of dependency, with many things on the critical path. In the animation it was clear that information was flowing around and across the cogs, and this was where I highlighted a major weakness: if the information flow is not fully automated (i.e. it relies on manual processes), there can be severe consequences for the design flow. I decided to conduct an experiment to prove this point.

‘The Experiment’

The goal of the experiment was to mimic a concurrent chip development. I wanted to assemble a system development team from the audience:

An architect

A hardware design engineer

A verification engineer

A virtual prototype engineer

An embedded software developer

I managed to get five volunteers (with the promise of a 16GB memory stick), and I volunteered as the sixth member of the team: the project manager. The experiment was to complete a specific HW/SW implementation and integration task across the different design teams. The focus was on the HW/SW interface, as it is common to all of these teams. I brought along four copies of an ARM UART PrimeCell specification. The architect needed to publish (hand out) the specifications to the other team members, those team members had to independently implement a single piece of information from the specifications, and then everyone had to come together and agree that all implementations were aligned. I highlighted the piece of the specification to be implemented and where to find it. The 'implementation' was simply to write down this single piece of information, which was as follows:

The reset value of the UARTCR register, found in Table 3.1 of Chapter 3.2.

Now for the fun bit: As project manager I gave the team a schedule to complete the HW/SW integration as follows:

The Architect had 5 seconds to hand out all the specs

Each of the implementation teams had 10 seconds to ‘implement’ the specs (Write down the value)

The teams then had 10 seconds to agree alignment

As project manager, I added 5 seconds contingency

I also represented this schedule visually for the audience.

This was a total of 30 seconds. I got good buy-in from my teams and was ready to start the clock. First, though, I had to separate two team members who were sitting close to each other, in order to simulate geographically dispersed teams. I then gave the teams a countdown, shouted 'GO!' and started the stopwatch.

Releasing the specifications. The first task was for the architect to hand out the specifications.

This task took 6 seconds, and I (as project manager) started whining that my project was already running late while keeping an eye on the stopwatch.

The implementation seemed to be delayed slightly as each team member started looking for the correct piece of information. I called out the chapter, table and register name (I also had the register circled on the specs).

You could see the benefit of working concurrently, as these four teams were working independently, so I shouldn't have expected to wait long. However, it took 14 seconds to finish the implementation, at which point I was complaining that my project had a 10%-15% slip and I wasn't happy. I asked the teams to get together quickly and agree that their implementations were aligned, and to hurry up because the project was already critically late.

After about 8 seconds of the integration phase, I could tell something was up. There was a lot of shaking of heads and a lot of finger pointing. I heard someone say that there were incorrect values in the software, while someone else was pointing at the RTL design. Some of the team went back to their chairs and returned with the specs to prove their point. Time passed, and at this stage, as project manager, I was getting exasperated with the schedule slip and 'demanded' to know what was wrong!

"It seems as if someone has a different version of the spec," the virtual prototype engineer told me.

'Really?', I asked sarcastically, then asked the audience if anything like this had ever happened in their projects before. My team looked on, not knowing what to do. I said, quite curtly, 'Well, use the latest version of the spec!' They looked at version numbers and dates, and finally they aligned on the correct value for the register. I thanked the team for their input and sent them back to their seats.

The Result

The team's observation was correct: there were two different versions of the UART specification in play (PL010 and PL011). In one specification the reset value was 0x000 and in the other it was 0x300. The effect on my project was devastating: from spec to alignment it took 136 seconds instead of the predicted 30 seconds, more than a 4x slip in the project schedule. I presented an example slippage and asked the audience to consider that the timescale was days rather than seconds, which conveyed the gravity of misaligned teams working concurrently. (The slides show a slippage of 18 seconds, whereas it was really 106.)

At this stage I introduced SID, the 'insidious' bug that is very prevalent around manual processes and that can very quickly contaminate these types of concurrent design flows.

In this experiment, SID was lurking behind team misalignment. There weren't any real implementation bugs, but when the misalignment surfaced, implementation bugs were raised (e.g. the RTL implementation was deemed to be wrong). For document-driven design processes, I showed the types of bugs that contaminate concurrent design flows and affect design quality.

The types of bugs in this area are as follows:

Specification bugs: Bugs contained in the specifications themselves

Interpretation bugs: Bugs introduced into an implementation by misinterpreting the spec

Translation bugs: Bugs introduced when translating from a specification to a specific implementation

Synchronization bugs: Bugs that arise when teams are misaligned, for example working from different versions of a specification

All of these impact quality and, as seen in the experiment, can cause serious problems with integration schedules and costs.
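To make the synchronization-bug case concrete, here is a minimal, hypothetical sketch (my own illustration, not something shown in the presentation) of the kind of check a single-source, automated flow enables: each team's "implementation" of the UARTCR reset value is compared against one golden value, so a PL010/PL011 version mismatch is flagged immediately rather than surfacing during integration.

```systemverilog
// Minimal, self-contained sketch (not from the original presentation): compare
// each team's implemented UARTCR reset value against one golden, single-source value.
// The 0x000 / 0x300 values are the two reset values mentioned in the experiment;
// the per-team variables below are hypothetical.
module uartcr_reset_check;
  localparam logic [15:0] GOLDEN_RESET = 16'h0300;  // single-source spec value (PL011)

  // Hypothetical values captured from each team's implementation.
  logic [15:0] rtl_reset     = 16'h0300;
  logic [15:0] vp_reset      = 16'h0000;  // built from the older PL010 spec
  logic [15:0] sw_init_value = 16'h0300;

  initial begin
    if (rtl_reset     !== GOLDEN_RESET) $error("RTL reset value out of sync: %h", rtl_reset);
    if (vp_reset      !== GOLDEN_RESET) $error("Virtual prototype out of sync: %h", vp_reset);
    if (sw_init_value !== GOLDEN_RESET) $error("Software init value out of sync: %h", sw_init_value);
  end
endmodule
```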

The Solution

The proposed solution is, ultimately, more automation at the front end of the design flow. The main focus is on transforming paper-based specifications into machine-readable or executable specifications, and on automating the flow of these specifications into the different implementation processes. This essentially eliminates the aforementioned types of bugs.

An executable specification not only improves quality and synchronization but also provides immediate turnaround for spec changes, thus increasing productivity.

I gave the example of Duolog's Socrates-Bitwise, which can be considered an executable specification of HW/SW interface registers. With Socrates-Bitwise, a user enters information in a GUI (or imports text/XML formats). Coherency checks are run on the specification to ensure all the data is consistent. From this single specification, many different formats can be generated automatically, including documentation, RTL, UVM SystemVerilog, SystemC and a C API.
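To give a flavour of what one of these generated outputs looks like, below is a hand-written sketch of a UVM register model for the UARTCR register from the experiment above. This is an illustration of the concept only, not actual Socrates-Bitwise output; the class name, field names and field layout are my assumptions (an enable bit at bit 0 and transmit/receive enables at bits 8 and 9, giving the 0x300 reset value mentioned earlier).

```systemverilog
// Hand-written sketch of a generated UVM register model (illustrative only,
// not actual Socrates-Bitwise output). Field names and layout are assumptions.
`include "uvm_macros.svh"
import uvm_pkg::*;

class uartcr_reg extends uvm_reg;
  `uvm_object_utils(uartcr_reg)
  rand uvm_reg_field uarten;   // bit 0: UART enable (assumed layout)
  rand uvm_reg_field txe;      // bit 8: transmit enable
  rand uvm_reg_field rxe;      // bit 9: receive enable

  function new(string name = "uartcr_reg");
    super.new(name, 16, UVM_NO_COVERAGE);  // 16-bit control register
  endfunction

  virtual function void build();
    uarten = uvm_reg_field::type_id::create("uarten");
    txe    = uvm_reg_field::type_id::create("txe");
    rxe    = uvm_reg_field::type_id::create("rxe");
    // configure(parent, size, lsb_pos, access, volatile, reset, has_reset, is_rand, individually_accessible)
    uarten.configure(this, 1, 0, "RW", 0, 1'b0, 1, 1, 0);
    txe.configure(this, 1, 8, "RW", 0, 1'b1, 1, 1, 0);  // reset value 0x300: bits 8 and 9 set
    rxe.configure(this, 1, 9, "RW", 0, 1'b1, 1, 1, 0);
  endfunction
endclass
```

Because every team consumes artefacts generated from the same source description, a reset-value mismatch like the PL010/PL011 one cannot quietly creep back into a single implementation.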

So does this make a difference? Absolutely. For something like the creation of a UART PrimeCell IP, the blue portion of the graph I presented shows the percentage of design collateral that is consumed by register implementation.

By automating this blue portion from an executable specification like Socrates-Bitwise, we see huge productivity gains, immediate turnaround times for incremental flows, and ZERO bugs, as these implementations are all aligned to a single source. Automating this at the IP and SoC levels has a significant impact on the overall cost of SoC development.

Acknowledgements:

Thanks to the audience members, my team for 3 minutes, who helped me on the day!

Thanks to Joseph Hupcey III of Cadence, who allowed me to use his photos in this blog.

Accellera at DAC: Defining a Universal Verification Methodology
(Posted June 7, 2010)

First of all, we'd like to invite all DAC attendees to Accellera's breakfast and panel, "UVM: Charting a New Course", on Tuesday, June 15, 7:30 am - 9:00 am, in Room 203B of the Convention Center.

The increasing complexity of SoC designs is old news, and it is a foregone conclusion that design is a relatively bounded problem compared to verification. Just as design reuse through semiconductor IP (a.k.a. design IP) helped bring designers up the productivity curve, over the last decade Verification IP (VIP) has done the same for verification engineers. Two leading methodologies, the Verification Methodology Manual (VMM) and the Open Verification Methodology (OVM), helped accelerate the adoption of structured verification methodologies based on SystemVerilog, as well as the creation of commercially available verification IP to independently validate the integration of design IP in SoCs. Essentially, both methodologies are collections of SystemVerilog classes with inherent semantics for their behavior in the different phases of a simulation. The user creates verification objects from these classes and attaches them to the design components as traffic/data generators, monitors, checkers, and so on.
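As a minimal illustration of what "a collection of SystemVerilog classes with inherent phase semantics" means in practice, here is a sketch of typical UVM-style usage (the transaction and component names are made up; this is not code from any particular VIP):

```systemverilog
// Minimal sketch of the class-based approach described above (typical UVM usage;
// the names bus_item and bus_monitor are made up). Behavior is tied to the
// library's standard phases: build_phase for construction, run_phase for activity.
`include "uvm_macros.svh"
import uvm_pkg::*;

// A transaction object, used as generated traffic/data.
class bus_item extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  `uvm_object_utils(bus_item)
  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass

// A verification component attached to the design, in the same spirit as the
// drivers, monitors and checkers mentioned above.
class bus_monitor extends uvm_monitor;
  `uvm_component_utils(bus_monitor)
  uvm_analysis_port #(bus_item) ap;  // publishes observed items to checkers/scoreboards

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  virtual function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    ap = new("ap", this);
  endfunction

  virtual task run_phase(uvm_phase phase);
    `uvm_info("MON", "Observing DUT traffic", UVM_LOW)
  endtask
endclass
```

The library calls build_phase on every component to construct and configure the environment, then run_phase while simulation time advances; the user's job is mainly to fill in these hooks and connect the components to the design.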

Both verification methodologies are built on SystemVerilog, both have been available under the Apache license, and both have been successfully deployed in production environments, with one caveat: only as long as all the verification IP was built on one of them and not the other. This is where the problem arises. Many projects acquire Verification IP (VIP) from multiple vendors, and occasionally even multiple groups inside a company may have worked independently using different methodologies. Naturally, there is a conflict when integrating such VIPs into one consistent verification environment.

In 2007-08 this was recognized as an issue, and leading users formed the Verification IP Technical Subcommittee (VIP-TSC) under Accellera. By July 2009, twelve recommended practices had been formalized in the form of an API to allow interoperability of VMM and OVM VIPs in a single environment. However, this was only seen as the first step in solving a larger problem: that of having publicly available universal base classes that can be used for creating a wide variety of VIPs. Naturally, if all VIPs are based on the same base class library, one does not need to go through an interoperability API. Thus the second phase of the VIP-TSC's efforts, the UVM base classes, came to life.

The UVM base class library is based on SystemVerilog, with OVM 2.1.1 used as the starting point to define UVM. In Accellera's Early Adopter release of UVM (UVM-EA), there were enhancements to the callback and end-of-test features, a new type of message catcher callback was added, and objects were renamed with uvm_*/UVM_* prefixes. The VIP-TSC has a list of items that brings features from OVM, VMM and other home-grown methodologies into UVM-EA for release 1.0 and beyond. However, the current and planned features of the UVM base class library are best described as the reflection of the collective knowledge of the verification experts participating in the VIP-TSC.
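For readers unfamiliar with the message catcher mentioned above, the sketch below shows the usage pattern as it appears in later UVM releases (written from memory as an illustration, not taken from the UVM-EA release itself): a report catcher registered as a callback can inspect, modify, demote or swallow specific messages before they are issued. The catcher class name and message ID below are made up.

```systemverilog
// Illustrative use of the report/message catcher callback (typical UVM usage,
// written from memory; not taken from the UVM-EA documentation itself).
`include "uvm_macros.svh"
import uvm_pkg::*;

class demote_expected_errors extends uvm_report_catcher;
  function new(string name = "demote_expected_errors");
    super.new(name);
  endfunction

  // Called for every report; the catcher may modify or swallow it.
  virtual function action_e catch();
    if (get_id() == "EXPECTED_ERR" && get_severity() == UVM_ERROR)
      set_severity(UVM_INFO);   // demote a known, expected error to info
    return THROW;               // pass the (possibly modified) report on
  endfunction
endclass

// Registering it globally, e.g. from a test's build_phase:
//   demote_expected_errors c = new();
//   uvm_report_cb::add(null, c);
```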

In other words, are we just transferring knowledge from syntactically and semantically different methodologies into a new one? What is the real value of this exercise? If we fast-forward a year, what will UVM base class release X look like? What features should it have to solve the problems faced a year from now? Three years from now? Are we looking at adding more of the same, or at making a quantum leap in our ability to deal with much larger and significantly more complex designs? What specifically are we doing to improve our ability to find bugs in the design and then fix them?

This is the topic of our breakfast discussion at DAC, hosted by Accellera and sponsored by its members Cadence, Mentor and Synopsys. The panelists are verification experts from our user and vendor communities. Our moderator is no stranger to challenges or to stimulating great dialog across the industry. This is the time for you to find out more and chime in. See you there.