Verification Guild
A Community of Verification Professionals


Functional Coverage

 
bugfinder
Senior


Joined: Jan 12, 2004
Posts: 19

Posted: Wed Oct 13, 2004 3:53 am    Post subject: Functional Coverage

Functional coverage is currently used in pretty much all RTL verification environments. In my personal experience, such feedback is a must in a random-based environment, but it takes a lot of time and manual effort to define the coverage points. I mean just the English-language description of what to cover. Implementation, however, is easy with HVLs.

In communication SoCs, coverage points typically run into the hundreds, and doing cross coverage on them takes a lot of effort, both in defining which cross matrix is relevant and in analysing where the holes are.

This thread could be used to share the methods and styles engineers use to define functional coverage. Sharing experiences and pitfalls could be very useful for others. One could even upload freeware tools or scripts that help in defining/analysing coverage points.
Leo
Senior


Joined: Jan 06, 2004
Posts: 12
Location: San Jose, CA

Posted: Wed Oct 13, 2004 12:34 pm

This is a great subject to discuss. I think there are two aspects to this discussion:

1. How do you plan the verification and create a suitable, sufficient verification plan?
2. How do you collect coverage information, analyze both the coverage and the failures, and decide on the next step?

Both of these can be addressed by starting your project with verification planning and controlling/managing your verification process with a capable tool.

Verisity has a brilliant product called "vManager", which automates the deployment of simulation runs, analyzes failures and coverage data, and controls the steps toward closure.

I think engineers and managers will like it very much, because it automates tasks that require intensive human interaction (tedious tasks a lot of us hate) as well as custom tool development.

For more detailed information, you can check out the web site.

http://www.verisity.com/products/vmanager.html

Cheers,
_________________
-----------------------------------------------------------
Levent Caglar
Solutions Architect
Verification IP & IPCM
Cadence Design Systems
2655 Seely Avenue
San Jose, CA 95134
levent@cadence.com
tel:+1-408-914-6818
akiva
Senior


Joined: Mar 22, 2004
Posts: 20
Location: Israel

Posted: Sun Oct 17, 2004 4:26 pm

I think this is a great topic.

>[BugFinder] In my personal experience, such feedback is a must in a random-based environment, but it takes a lot of time and manual effort to define the coverage points.

I would go back and analyze what is taking you so long to define them. My guess is that it is more the barrier of "What does this verification guy want from us?!?!" than the time actually spent defining.

Take the alternative, for example (functional coverage vs. test definition):

I can define (functional coverage) that the 'fifo reached overflow'.

or

I can define a test (focused test) that sets up a scenario to reach it:
a. Send in 266 bytes of data into the fifo, allow the first 9 bytes to pass through, lock the bus, wait for overflow, read the interrupt, clear it, etc.
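
For illustration, here is how small the coverage-point alternative can be. A minimal sketch in SystemVerilog (shown as one possible HVL; the clk and fifo_overflow signal names are hypothetical):

    module fifo_cov (input logic clk, input logic fifo_overflow);
      covergroup fifo_cg @(posedge clk);
        overflow_cp : coverpoint fifo_overflow {
          bins hit = {1'b1};  // a single bucket: "the fifo reached overflow"
        }
      endgroup
      fifo_cg cg = new();  // samples automatically on every clock edge
    endgroup

Compare those few lines with the focused-test scenario above.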

I've observed the following rules of thumb:
A. A coverage point, from start to end of a project (including definition, review, coding, updating, and reviewing hit/miss), costs ~1 hour.

B. A focused test, from start to end of a project (including defining, reviewing, coding, debugging, re-coding for changes, fixing failures, fixing a failure a month later, and fixing another a few weeks after that), costs an average of 3 days. (Do the math on your last project: how many directed tests resulted, and how many man-years went into the exercise?) And by the way, if you reuse the directed tests on your follow-on project, you don't gain that much.


So where's the catch? (Why don't we see a 27X improvement?)

1. When you define functional coverage, you usually define it more thoroughly, because it is much more concise than a test definition and allows for easier crossing. So you will have ~3X more coverage points (items or crosses) than you would have tests defined, which overall amounts to 30X or more buckets in your coverage. Example: 1,000 tests ~ 3,000 coverage points ~ 30,000 buckets that need to be hit to reach 100%.

2. While 50% of the space will get hit relatively easily with some good tests exercising the main path, to get to 100% you need to build a really good random environment (either that, or write all the tests focused, which would defeat the purpose). So you have to build a much more intelligent environment (3X the effort) that can reach all those great corner cases. What you have done here is take the complete "driver" that was once distributed amongst 1,000 tests and build it into a high-quality random simulation environment.

So what you are doing when you work with functional coverage is focusing effort, organizing work, and getting better quality out of your product; you aren't necessarily going to tape out in half the time, but the probability that the chip will meet your requirements is much higher. On your follow-on project, though, you will see a dramatic improvement in chip readiness, and you will be able to tape out much earlier.

Akiva - akivam@aceverification.com
dave_whipp
Senior


Joined: Jan 06, 2004
Posts: 76
Location: Santa Clara, CA

Posted: Sun Oct 17, 2004 5:27 pm

akiva wrote:
either that, or write all the tests focused, which would defeat the purpose


I'm not sure I agree with that. A few years ago I was working on a project that needed a lot of tests written. It was an SoC project, and we had a flow set up that let us write most of the tests in C or assembler. Anyway, to get the tests written, we had almost everyone on the project work flat out completing the tests from the test plan. We met the schedule and taped out.

On the follow-up project, I reviewed the tests. Over 20% of them did not hit what they claimed to have hit. We got lucky: no serious bugs slipped through.

The moral of the story is not just that it's ineffective to have a test-fest coding binge. It's that even directed tests need to consider all three prongs of a verification strategy: how to hit the cases (the stimuli), how to know that the expected behavior happened (e.g. comparison with a golden model), and how to know that you've hit the desired cases (functional coverage, a.k.a. an executable test plan).

The significant difference between random and directed tests is how we generate the directed-random stimuli: computers or monkeys.


Dave.
alexg
Senior


Joined: Jan 07, 2004
Posts: 586
Location: Ottawa

Posted: Sun Oct 17, 2004 8:40 pm

I propose the following strategy:

For each unit function, we have to think about a test method and a check method. The test method defines how to apply stimulus in order to invoke the function; the check method defines how to guarantee that, when invoked, the function performs properly. The test method thus defines the stimulus generation strategy: directed, constrained-random, or a combination of the two.
The check method can be single-shot (with expected results coded in the test case) or permanent (using an assertion/checker/scoreboard). Having done that, we can define functional coverage points and functional coverage goals relative to this unit function. Functional coverage points can be defined either on the RTL (for simple functions) or on verification components such as monitors/checkers (for more complex functions). Then it is desirable to cross these functional coverage definitions with all interesting static modes, or with the coverage defined for the related function(s).
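
For illustration, a permanent check plus an RTL-level coverage point for one unit function might look like this in SystemVerilog (a minimal sketch; the wr_en, full, and overflow_irq signals and the one-cycle protocol are assumptions):

    module fifo_checks (input logic clk, wr_en, full, overflow_irq);
      // permanent check: a write while full must raise the overflow
      // interrupt on the next cycle (assumed protocol, for illustration)
      a_ovf : assert property (@(posedge clk) (wr_en && full) |=> overflow_irq);
      // coverage point on the same function: was it ever invoked?
      c_ovf : cover property (@(posedge clk) (wr_en && full));
    endmodule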

Regards,
Alexander Gnusin
bugfinder
Senior


Joined: Jan 12, 2004
Posts: 19

Posted: Tue Oct 19, 2004 2:13 am    Post subject: Functional Coverage vs Testplan definitions

Could you please elaborate on the unit function? Is a unit function defined at a slightly higher level than the more detailed levels?

For example, take a FIFO with a 32-bit data bus, a write enable, a read enable, and internal write and read pointers. At a higher level, the design uses this fifo to store and forward packets of 64 to 1500 bytes. Data is always written 32 bits wide; invalid bytes are just stuffed with 0s. How do we define the unit function in this case?

In a traditional test method, I would define the following in my testplan, just to test the write part:
i) write function (normal case, no overflow)
ii) write function with overflow
iii) misc. conditions, like both write and read enable active at the same time.
Then I would probably go and write a directed test for each case.

Currently in my flow, I define the same in the testplan and then define coverage points, deriving equations over internal/external signals to form normal_case_no_overflow, write_with_overflow, write_and_read_enable_w_overflow, and write_and_read_enable_w_nooverflow variables. I define these as "coverable" items using the HVL of my choice. (Defining these equations and specifying coverage points is very easy with any HVL.)

I would then have a generator and an analyzer with random attributes that define a packet of some length between 64 and 1500 bytes, and I would generate the packets with random interval spacing. As the packets get generated, I would start writing into the fifo and, using some scheduling algorithm, start reading from the fifo, again leaving random gaps between reads.
I would probably have one testcase and keep running it again and again until all my cover variables say "hit".
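
For illustration, the random attributes of such a generator might be captured like this in SystemVerilog (a sketch; the field names and the gap range are assumptions):

    class packet;
      rand int unsigned len;  // payload length in bytes
      rand int unsigned gap;  // idle cycles before the next packet
      constraint c_len { len inside {[64:1500]}; }
      constraint c_gap { gap inside {[0:16]}; }  // spacing range is a guess
    endclass

Each randomize() call on a packet object then yields a new legal length and spacing for the next write burst.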

In some cases, we define cover variables that are not as detailed as the ones I mentioned: just a variable to indicate overflow, a variable to indicate writes, one for reads, etc., and then do cross coverage on them to identify the same conditions as the detailed variables.
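
A sketch of that simple-variables-plus-cross style in SystemVerilog (the signal names are hypothetical; the cross rebuilds the detailed variables automatically):

    module fifo_ops_cov (input logic clk, wr_en, rd_en, overflow);
      covergroup fifo_ops_cg @(posedge clk);
        wr_cp  : coverpoint wr_en;    // automatic bins: 0 and 1
        rd_cp  : coverpoint rd_en;
        ovf_cp : coverpoint overflow;
        // e.g. <wr=1, rd=1, ovf=1> is write_and_read_enable_w_overflow
        wr_rd_ovf : cross wr_cp, rd_cp, ovf_cp;  // 2x2x2 = 8 buckets
      endgroup
      fifo_ops_cg cg = new();
    endmodule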

This sort of definition looks easy if we take one small unit at a time. But in a typical SoC there are hundreds of such units, and more and more higher-level layers get added on top of them. For example, we may want to test sending 64-byte packets with some error during overflow, or multiple packets coming in during overflow. There could even be a higher-layer function which, if it sees a new packet coming in during overflow, sets some attribute bit in a register or raises an interrupt to the attached microprocessor. How does one define coverage points in this case?
The detailed approach prompts me to define cover variables for each possible combination of register bits and internal signal values. Here, hole analysis becomes a big issue: I end up with a lot of combinations, and we painfully go and remove whatever is not relevant. The chances of the verification engineer making a mistake get higher.

All I observe here is that some hierarchy forms as far as functions go. Is there any method people use to define this hierarchy, or to sort functions in some particular way, so that defining these coverage points finally becomes easier?

Are this approach and this concern valid, or am I unnecessarily complicating things here?
hemanth
Senior


Joined: Aug 16, 2004
Posts: 93
Location: Bangalore

Posted: Tue Oct 19, 2004 5:12 am

bugfinder,
I feel that the whole process you mentioned for the functional coverage aspects of a unit-level function is the way to go, but I also think that once we get above those base-level functions, we cannot simply go on treating the higher levels as a collection of the lower levels; the methods of verifying them have to take a different approach. Though the higher levels run as dictated by their low-level entities, verification ought to concentrate on the higher-level transactions that happen between them, as opposed to the unit-level tasks. So you would probably rely on the correctness of unit-level functional coverage when verifying the higher levels. In essence, this keeps the combination of tests we need at each stage from blowing up. Functional coverage is then actually a metric we apply to stages of functional abstraction at the top levels, and test cases are the means to exercise them.
jmcneal
Senior


Joined: Jan 12, 2004
Posts: 34
Location: Hillsboro, Oregon

Posted: Tue Oct 19, 2004 10:32 am

Bugfinder -

In your fifo example you mention defining coverage points for several situations. If you define the simplest of the points (overflow, underflow, full, empty, write, read), you can then make combinations of those events/points and cover all cases at the block level. Then, at the next level up (unit), you can selectively use some of the coverage points. For example, at the unit level in your example, you were interested in overflow when sending 64-byte error packets; there you can make use of the write and overflow coverage points to test this functionality.
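
For illustration, that unit-level reuse could be just a new cross over the block-level points, sketched in SystemVerilog (the signals and the 64-byte bin are assumptions):

    module unit_cov (input logic clk, overflow, pkt_err,
                     input logic [10:0] pkt_len);
      covergroup unit_cg @(posedge clk);
        ovf_cp : coverpoint overflow;
        err_cp : coverpoint pkt_err;
        len_cp : coverpoint pkt_len { bins short_64 = {64};
                                      bins rest     = default; }
        // the unit-level question: a 64-byte error packet during overflow
        err_ovf_64 : cross ovf_cp, err_cp, len_cp;
      endgroup
      unit_cg cg = new();
    endmodule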

Also, at the unit level you shouldn't have to concern yourself with the minute details of verifying the fifo itself, as you have already demonstrated that it works (in the block-level verification). At the unit level you are now concerned with whether the fifo got connected correctly, whether the unit responds correctly to the overflow/underflow errors, and whether the unit performs its job correctly. As you progress up the stack (block, unit, module, ...) you can disable more and more of the coverage gathering from the lower levels.

On one project I worked on, we would enable several levels of coverage-gathering detail independently. For example, on nightly regressions we'd only cover the top-level stuff (inputs, outputs, major functionality); on a weekend regression we'd run the top- and module-level coverage; and we'd turn everything on for a pre-tapeout run.
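
One possible mechanism for that kind of tiered collection, sketched in SystemVerilog (the plusarg name and the two covergroups are hypothetical stand-ins):

    module cov_ctrl (input logic clk, input logic [3:0] state);
      covergroup top_cg @(posedge clk);  // inputs/outputs, major functionality
        coverpoint state;
      endgroup
      covergroup blk_cg @(posedge clk);  // stand-in for detailed lower levels
        coverpoint state;
      endgroup
      top_cg cg_top;
      blk_cg cg_blk;
      initial begin
        cg_top = new();                                   // always on: nightly
        if ($test$plusargs("COV_BLOCK")) cg_blk = new();  // pre-tapeout runs
      end
    endmodule

Running with +COV_BLOCK on the simulator command line then turns the detailed coverage on without recompiling.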

j
jmcneal
Senior


Joined: Jan 12, 2004
Posts: 34
Location: Hillsboro, Oregon

Posted: Tue Oct 19, 2004 10:37 am

dave_whipp wrote:

The significant difference between random and directed tests is how we generate the directed-random stimuli: computers or monkeys.


Laughing out loud on this one.

I have to agree, though; I've been in the same situation. Thirty "newbies" (to the project, at least) writing tests in the last month of the project isn't very effective.

j
Verification Guild © 2006 Janick Bergeron