Joined: Jan 05, 2004 Posts: 1325 Location: Los Angeles, CA
Posted: Thu Feb 26, 2004 3:23 pm Post subject: Metrics for estimating design and verification task
Are there any accepted industry standards or
guidelines/practices for determining how long a design or verification task
should take? For example, do people just use lines-of-code-per-day metrics,
or is something more sophisticated used?
This question was recently posed to me, and I would like to hear this group's comments.
At my old organization, chip design and verification estimates for proposal work were based on previous actual designs, fudged upward to account for complexity, and fudged downward to account for improvements in processes/technology and for instructions from the "guy with the two pointy hair cones".
Of course every project has a schedule (specs, reviews, design, verification). However, I always felt that actual progress was often ambiguous, dictated by ambitious schedules and by what management wanted to hear. For example, at what point can you say that a sub-block is designed? When the code is written but unverified? When it has been visually verified against the written spec? When it is integrated with other sub-blocks? When the design has been exercised with directed tests? With pseudo-random tests? When x% code coverage is reached? When y% functional coverage is reached? What about structural verification (deadlock, terminal states, linting)? Another schedule killer is the back-end work for ASIC release.
I personally believe that advances in technology are shortening the design and verification task. These include:
1. ABV for requirement definition of latencies, interfaces, and control functions.
2. ABV in the design and verification process
3. Use of commercial libraries of verification units, typically for standard buses.
4. Use of IP blocks.
5. Integration of higher-level languages for signal processing.
6. Other ... please add more. _________________ Ben Cohen http://www.systemverilog.us/
* SystemVerilog Assertions Handbook, 3rd Edition, 2013
* A Pragmatic Approach to VMM Adoption
* Using PSL/SUGAR ... 2nd Edition
* Real Chip Design and Verification
* Cmpt Design by Example
* VHDL books
Joined: Feb 10, 2004 Posts: 73 Location: St Louis, Mo
Posted: Sun Feb 29, 2004 12:39 pm Post subject:
This post is a perfect place to rant.
The sub-block is not done until it is tested. That is so important let me restate it with added emphasis: THE SUB-BLOCK IS NOT DONE UNTIL TESTED! Same goes for blocks, units, chips and systems.
A good way to track this is through a system I think is called Earned Credit. (Yes John, I am admitting I like it; you were right.) Basically you break down your tasks and assign a certain percentage of completeness to certain milestones. For example: starting is worth 5%; code written but not finished, 20%; code finished but untested, 20%; testing started but not fully tested/debugged, 20%; tested and debugged code, 30%; and writing your documentation is the remainder. Or something like that. There are many ways to break that down, as long as testing is represented somewhere. And as you move from block to chip to system, the breakdown will become more heavily weighted toward testing.
Managers like this because they can average all the task percentages and come up with a percent done. Managers like hard numbers. But more importantly, it builds the importance of testing into getting a task done, and explicitly states what is expected of everyone.
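The milestone scheme above is simple enough to sketch in a few lines. This is a minimal illustration, not a real tracking tool; the weights are the example percentages from this post, and the task data is invented.

```python
# Back-of-envelope "earned credit" tracker: a sketch only.
# Milestone weights follow the example breakdown in the post above
# (5 + 20 + 20 + 20 + 30 + 5 = 100); adjust to your own process.

MILESTONES = [
    ("started", 5),
    ("code written", 20),
    ("code finished", 20),
    ("testing started", 20),
    ("tested and debugged", 30),
    ("documented", 5),
]

def earned_credit(completed):
    """Percent credit earned for one task, given its completed milestones."""
    return sum(weight for name, weight in MILESTONES if name in completed)

def project_percent(tasks):
    """The managers' number: average earned credit across all tasks."""
    return sum(earned_credit(done) for done in tasks) / len(tasks)

# Example: three sub-blocks at different stages (hypothetical data).
tasks = [
    {"started", "code written", "code finished", "testing started",
     "tested and debugged", "documented"},          # fully done: 100%
    {"started", "code written", "code finished"},   # coded but untested: 45%
    {"started"},                                    # just begun: 5%
]
print(project_percent(tasks))  # (100 + 45 + 5) / 3 -> 50.0
```

Note how a sub-block with all its code written but no testing still earns less than half credit, which is exactly the point of the scheme.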
YOU ARE NOT DONE UNTIL ALL TESTING IS DONE FOR THAT LEVEL.
I also find it useful to classify tests by where they will affect you first. Basically, functional tests are divided into something like LAB, BETA, and PRODUCT. Lab bugs will keep the machine from coming up in the lab and prevent any progress from being made. Beta bugs affect your first test deployments of the product, and Product bugs affect the end user. Each of these gets HIGH, MEDIUM, or LOW priority based on its importance to the overall design. When you are testing, you focus on LAB-High tests first and work your way down. Combining that with the earned-credit scheme, you can report that Lab tests are 50% complete, Beta tests 10%, and so on. You should never tape out if you haven't at least completed and debugged all Lab tests.
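A per-phase completion report like the one described is easy to mock up. This is a sketch under assumed data; the test names, phases, and pass/fail states are all made up for illustration.

```python
# Classify tests by the phase where a bug would first bite (LAB, BETA,
# PRODUCT) and report percent complete per phase. Hypothetical data.
from collections import defaultdict

PHASES = ("LAB", "BETA", "PRODUCT")

# (test name, phase, priority, passed?)
tests = [
    ("reset_bringup",  "LAB",     "HIGH",   True),
    ("pll_lock",       "LAB",     "HIGH",   True),
    ("dma_burst",      "LAB",     "MEDIUM", False),
    ("beta_traffic",   "BETA",    "HIGH",   False),
    ("corner_pkt_mix", "PRODUCT", "LOW",    False),
]

def phase_report(tests):
    """Percent of tests passing in each phase that has any tests."""
    totals, passed = defaultdict(int), defaultdict(int)
    for _name, phase, _prio, ok in tests:
        totals[phase] += 1
        passed[phase] += ok
    return {p: 100.0 * passed[p] / totals[p] for p in PHASES if totals[p]}

def ready_to_tape_out(tests):
    """The post's rule: never tape out until every LAB test passes."""
    return all(ok for _name, phase, _prio, ok in tests if phase == "LAB")

print(phase_report(tests))       # e.g. LAB ~67%, BETA 0%, PRODUCT 0%
print(ready_to_tape_out(tests))  # False: a LAB test still fails
```

In practice you would also sort within each phase by priority (HIGH before MEDIUM before LOW) to get the LAB-High-first ordering described above.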
Another thing you said that really ticks me off (pet peeve) is management pulling in schedules because that is what people (investors) want to hear. You cannot lie to yourself about your own capabilities. Most bottom-up schedules created by the engineers doing the work are already optimistic. Pulling in those optimistic schedules is just shooting yourself in the foot. What will end up happening is that you will cut corners, and that will backfire; people will work to meet dependencies that will never materialize; and you will exhaust your most valuable resource: your engineers. This leads to more rework and poorer-quality work that will take AT LEAST as long as the more realistic schedule, and it will cripple your product. Pulling in your schedule without adjusting other aspects will increase the time it takes to do the project! This is basic industrial engineering. There are so many studies on this in general that it boggles my mind why our industry ignores them. I wish more EEs would take at least one IE class. If it can be done, it can ONLY be done the right way: by being honest about your capabilities and not looking for high-risk shortcuts.
And the time spent in verification grows exponentially with the complexity of the design. It is not a linear curve! The best way to beat this is to move up the abstraction curve. HVLs are great, but you need some good OO programmers, and assertions are great too (or so I hear). As your design gets bigger you MUST move to new technologies: you are fighting an exponential curve. Gains from experience are more linear, and they top out after a few projects.
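The shape of that argument can be shown with a toy model. Every constant below is invented purely to illustrate the curves: exponential effort growth versus a linear, capped productivity gain from experience. It is not calibrated to any real data.

```python
# Toy model of the claim above: verification effort grows exponentially
# with design complexity, while experience gains are linear and plateau.
# All constants are assumptions chosen only to show the shape.

def verification_effort(generation, base=1.0, growth=1.5):
    """Effort in arbitrary units, growing exponentially per design generation."""
    return base * growth ** generation

def experience_gain(projects, per_project=0.15, cap=0.5):
    """Fractional productivity gain: linear, topping out after a few projects."""
    return min(per_project * projects, cap)

for gen in range(1, 6):
    effort = verification_effort(gen)
    speedup = 1 + experience_gain(gen)
    print(f"generation {gen}: raw effort {effort:5.2f}, "
          f"with experience {effort / speedup:5.2f}")
```

Run it and the gap widens every generation: the linear experience discount cannot keep up with the exponential growth, which is the case for moving to higher-abstraction methodologies instead.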
The RTL design task, in my opinion, ends somewhere during the verification period (more likely, closer to the end of the verification period).
So the most important question is: when does the verification period end?
Since verification is a science with many X's, there is no clear answer to this question. IMHO, one good answer is: when confidence is built that the design performs correctly when examined from different points of view.
Leaving aside the question of how we achieve that confidence, I would like to explain what I mean by "different points of view".
The most important one is definitely the point of view based on the Functional Specification. The Functional Spec defines the Verification Plan, and the Verification Plan defines a set of verification goals as well as metrics for building confidence that each goal has been met (such as functional coverage). I would call it the "Spec-based point of view".
The second may be the "Code-based point of view". Looking from the point of view of code coverage (which includes line, FSM, toggle, and conditional coverage), we may find interesting corner cases that were dropped from the verification plan.
Third, we may look from the designer's point of view. A designer may implement some complex and error-prone pieces of logic that require special attention from the verification standpoint. However, in most cases the Functional Spec does not contain this information, and the verification team is not alerted to potential problems that are obvious to the designer. This is where verification engineers have to interact closely with designers, attending microarchitecture review meetings or reviewing the design structure with the designer. The result of this work may be the definition and implementation of assertions, as well as additional test cases written to exercise specific implementations.
Fourth, we may look from the "Application" point of view. What scenarios will be run in the lab for post-production chip validation? What are the traffic patterns through our chip? What is the packet-type distribution when we are working with the most commonly used applications?
Fifth, there is the "structural" point of view. We may use lint or synthesis tools to find connectivity problems, etc. Data and control signals crossing asynchronous interfaces are a source of many bugs hidden from functional verification.
We may think of more "points of view". In most cases they nicely complement each other, giving us a more complete picture of the verification task in general. Though it is important to explore each of them, these "points of view" may consume unequal amounts of verification effort. I would suggest the "Spec-based" point of view may consume up to 70-80% of all verification effort.
Scheduling the verification phase of a design is notoriously difficult, with estimates being off by factors of 2x or more on a regular basis. Verification cycles are ultimately a function of code quality. SW folks have been dealing with defect densities for some time, and the same measures can be applied to RTL verification. If you assume some number of bugs per x lines of code and multiply by the average fix time (debug, code repair, retest, etc.), then you begin to get something quantitative. Factor in the percentage of time actually spent on verification (as opposed to meetings, coffee breaks, etc.) and the number of verifiers involved, and you get a task duration. Of course, the key elements are defect density and fix times. I've been collecting some metrics on the designs I've been involved with, but much of the code was debugged before I got here. What "virgin" code I have measured suggests a DD of .5% or so. Does anyone else collect these metrics? What sort of DDs are people seeing?
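The arithmetic described above fits in one function. The defect density default matches the ~0.5% figure quoted in the post; the fix time, utilization, and staffing numbers are assumptions for illustration only, so swap in your own measurements.

```python
# Sketch of the duration estimate above: bugs = DD * LOC, then divide
# total fix effort by effective verifier-hours per day. All default
# values except defect_density (the post's ~0.5%) are invented.

def verification_duration_days(
    lines_of_code,
    defect_density=0.005,  # bugs per line of "virgin" RTL (~0.5%, per the post)
    fix_time_hours=4.0,    # avg debug + code repair + retest per bug (assumed)
    utilization=0.6,       # fraction of the day actually spent verifying (assumed)
    verifiers=2,           # number of verification engineers (assumed)
    hours_per_day=8.0,
):
    expected_bugs = lines_of_code * defect_density
    total_fix_hours = expected_bugs * fix_time_hours
    effective_hours_per_day = utilization * verifiers * hours_per_day
    return total_fix_hours / effective_hours_per_day

# Example: a hypothetical 50k-line block.
print(round(verification_duration_days(50_000), 1))  # -> 104.2 days
```

Even with generous assumptions, the estimate is dominated by defect density and fix time, which is exactly why collecting those two metrics on real projects matters.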