Verification Guild
A Community of Verification Professionals


Meaningful Functional Coverage Metrics

 
romi
Senior

Joined: Feb 28, 2004
Posts: 88
Location: Minnesota

Posted: Thu Mar 11, 2004 11:09 pm    Post subject: Meaningful Functional Coverage Metrics

We've been struggling with how to create meaningful metrics based on the data we get from functional coverage assertions.

One preliminary metric might be to report the percentage of coverage assertions that fired at least once. This flags logic that is definitely not done being verified. However, 100% in this category does not mean the logic is fully verified: often confidence isn't achieved until an assertion has fired N times, where N is a goal the designer has set for that assertion. But if N is just a number, is that number really meaningful? Maybe the goal should be based on the percentage of cycles that were run, or on a comparison of one assertion to another, e.g. if A fired 20 times, B should fire at least 5 times. There is also the argument that some coverage assertions are more important than others, so they should carry a higher weight.

How have others come up with meaningful metrics? Thanks.
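The goal-and-weight scheme described above can be sketched in a few lines. This is only an illustration: the assertion names, fire counts, goals, and weights are all invented, and real counts would come from the simulator's coverage database.

```python
# Weighted, goal-based functional-coverage score (a sketch of the scheme
# described above). Each assertion carries a designer-set fire-count goal N
# and a relative weight; an assertion contributes min(fired/goal, 1.0),
# scaled by its weight. All data below is made up for illustration.

def coverage_score(assertions):
    """Return a 0..100 score from a list of dicts with keys
    'fired', 'goal', and 'weight'."""
    total_weight = sum(a["weight"] for a in assertions)
    if total_weight == 0:
        return 0.0
    score = sum(min(a["fired"] / a["goal"], 1.0) * a["weight"]
                for a in assertions)
    return 100.0 * score / total_weight

results = [
    {"name": "fifo_full",  "fired": 20, "goal": 10, "weight": 3},  # past goal, capped
    {"name": "retry_resp", "fired": 5,  "goal": 20, "weight": 1},  # 25% of goal
    {"name": "burst_term", "fired": 0,  "goal": 5,  "weight": 2},  # never fired
]
print(f"coverage: {coverage_score(results):.1f}%")
```

The `min(..., 1.0)` cap keeps one over-exercised assertion from masking others that never fired, which is exactly the failure mode of a raw percentage-that-fired metric.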
bdeadman
Senior

Joined: Jan 06, 2004
Posts: 204
Location: Austin, TX

Posted: Fri Mar 12, 2004 9:31 am

Quote:
One preliminary metric might be to report the percentage of coverage assertions that fired at least once


Hi,

I'm not sure what you mean by the word "fired". My guess is you don't mean "failed", but rather that the left-hand-side condition was satisfied in an assertion of the form:

always { sequence1 } |=> { sequence2 };

Is this correct? I think I would be naturally suspicious of any suite of testbenches that doesn't satisfy sequence1 somewhere.

Beyond that I would look for a couple more forms of coverage:

1) parallel paths

{ sequence1 } |=> ( { sequence2 } | { sequence3 } | { sequence4 } );

Did sequence2, sequence3 & sequence4 all get used at some point?


2) non-determinism

{ HREADY ; !HREADY[*0..15] ; HREADY }

How completely did you test the non-deterministic response? In this case, did you check it with at least 0, 1, 14 & 15 cycle delays? Full coverage would be nice, but it's usually the boundary conditions that cause problems.

This also raises a coding question - for coverage reasons it's rarely good to use unlimited [*] terms, because you don't know what represents full coverage.
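Tallying the observed wait-state delays against the legal 0..15 range, with the boundary cases called out separately, could look like the sketch below. The list of observed delays is invented; in practice it would come from a monitor watching the HREADY handshake.

```python
# Bin coverage for the non-deterministic response
# { HREADY ; !HREADY[*0..15] ; HREADY }: which delays were actually seen,
# and which of the boundary cases are still missing? (A sketch; the
# observed-delay data is made up.)

LEGAL_DELAYS = range(0, 16)      # 0..15 cycles of !HREADY
BOUNDARY = {0, 1, 14, 15}        # corner cases worth checking first

observed = [0, 1, 2, 2, 7, 15]   # hypothetical monitor output

hit = set(observed) & set(LEGAL_DELAYS)
missed = set(LEGAL_DELAYS) - hit
missed_boundary = BOUNDARY - hit

print(f"delay bins hit: {len(hit)}/{len(LEGAL_DELAYS)}")
print(f"boundary bins missed: {sorted(missed_boundary)}")
```

Reporting the missed boundary bins separately reflects the point above: exhaustive bin coverage is nice, but the boundaries deserve priority.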


3) transition coverage

{ a; b[*0..4]; c[*0..3]; (a && b && c) }

How completely did you cover the alternative paths?

a, abc
a, b, abc
a, c, abc
a, b, c, abc
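Checking off which of those four paths a test suite actually exercised is a small set-difference exercise. The sketch below assumes matched paths are reported by the simulator as tuples of the steps taken; the `seen` data is invented.

```python
# Path coverage for { a; b[*0..4]; c[*0..3]; (a && b && c) }: which of the
# four alternative paths enumerated above have been exercised? (A sketch;
# the 'seen' traces stand in for matches reported by the simulator.)

REQUIRED_PATHS = {
    ("a", "abc"),
    ("a", "b", "abc"),
    ("a", "c", "abc"),
    ("a", "b", "c", "abc"),
}

seen = {("a", "abc"), ("a", "b", "c", "abc")}   # hypothetical matches so far

missing = REQUIRED_PATHS - seen
print(f"path coverage: {len(seen & REQUIRED_PATHS)}/{len(REQUIRED_PATHS)}")
for path in sorted(missing, key=len):
    print("missing:", " -> ".join(path))
```

The same bookkeeping applies to the parallel-path case in point 1: replace the path tuples with the set {sequence2, sequence3, sequence4} and report what was never matched.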


I'm sure there are other possibilities for metrics!

Regards

Bernard
andyp
Junior

Joined: Mar 15, 2004
Posts: 5
Location: Parker, Texas

Posted: Mon Mar 15, 2004 8:32 pm    Post subject: Re: Meaningful Functional Coverage Metrics

Romi, you wrote:

> We've been struggling with how to create meaningful
> metrics based on the data we get from functional coverage
> assertions. ...

I think you may have the cart before the horse here. Coverage metrics ought to be driven by your verification plan, because the verification plan identifies the scope of the verification problem, quantifies it, and specifies the implementation solution: a verification environment.

The scope of the verification problem is captured in the coverage section of the verification plan (the other two sections being stimulus generation and response checking). The coverage section should describe how functional, code, and assertion coverage will be used to measure verification progress; each captures a different aspect of the problem. Low-level design behavior -- a step above the RTL itself -- is captured with coverage assertions, and the metrics precede the deployment of the assertions. The resulting coverage section and its associated incremental coverage goals serve to quantify the verification problem.

The implementation solution to the verification problem is the functional specification of the verification environment, an orthogonal part of the verification plan.
_________________
Andrew Piziali, <andy@piziali.dv.org>
Verification Guild © 2006 Janick Bergeron