Verification Guild
A Community of Verification Professionals

Automating coverage feedback

 
Newsletter (Original Contribution)
Joined: Dec 08, 2003 | Posts: 1107
Posted: Wed Dec 17, 2003 5:48 pm    Post subject: Automating coverage feedback

(Originally from Issue 4.19, Item 5.0)

From: Paul Zehr

Simulation-based functional validation (pre-silicon) tends to have the problem that coverage feedback into test generation is manual. I would like to know if anyone has successfully automated this process on a reasonably complex design. I am particularly interested in the algorithm chosen to generate the new tests.

Newsletter (Original Contribution)
Joined: Dec 08, 2003 | Posts: 1107
Posted: Mon Jan 05, 2004 10:45 pm

From: Avi Ziv

The manual work needed for analyzing the coverage reports and
translating them to directives for the test generator can constitute
a bottleneck in the verification process. Therefore, considerable
effort is spent on finding ways to automate this procedure, and close
the loop of coverage analysis and test generation. This automated
feedback from coverage analysis to test generation, known as Coverage
Directed Test Generation (CDG), can reduce the manual work in the
verification process and increase its efficiency.

In general, the goal of CDG is to automatically provide directives
that are based on coverage analysis to the test generator. This can be
further divided into two sub-goals: First, to provide directives to
the test generator that help in reaching hard cases, namely uncovered
or rarely covered tasks. Achieving this sub-goal can shorten the time
needed to fulfill the test plan and reduce the number of manually
written directives. Second, to provide directives that make any
coverage task easier to reach, using a different set of directives when
possible. Achieving this sub-goal makes the verification process more
robust, because it increases the number of times a task has been
covered during verification.

In the Simulation Methods Department of the IBM Haifa Research Lab, we
developed a new approach to coverage directed test generation. Our
approach is to cast CDG in a statistical inference framework, and
apply computer learning techniques to achieve the CDG
goals. Specifically, our approach is based on modeling the
relationship between the coverage information and the directives to
the test generator using Bayesian networks. A Bayesian network is a
directed graph whose nodes are random variables and whose edges
represent direct dependency between their sink and source nodes. Each
node in the Bayesian network is associated with a set of parameters
specifying its conditional probability given the state of its parents.
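
As a toy illustration (not the actual IBM model; the variable names and
probabilities below are invented for this sketch), such a network can be
written down directly as a graph plus conditional probability tables,
for example in Python:

Code:

# Toy Bayesian network for CDG-style modeling (illustrative only).
# Structure:  cmd_weight (directive)  --->  cache_event (coverage attribute)
# Each node carries a conditional probability table (CPT) given its parents.
network = {
    "cmd_weight": {
        "parents": [],
        # P(cmd_weight): prior over the directive values the generator may use
        "cpt": {(): {"favor_reads": 0.5, "favor_writes": 0.5}},
    },
    "cache_event": {
        "parents": ["cmd_weight"],
        # P(cache_event | cmd_weight): likelihood of each coverage outcome
        # for each directive setting (in practice, learned from simulation)
        "cpt": {
            ("favor_reads",):  {"read_hit": 0.7, "write_miss": 0.3},
            ("favor_writes",): {"read_hit": 0.2, "write_miss": 0.8},
        },
    },
}

A realistic model would have many directive and coverage attributes, and
possibly hidden nodes, but the same node/edge/CPT representation applies.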

Bayesian networks are well suited to the kind of modeling required for
CDG, because they offer a natural and compact representation of the
rather complex relationship between the CDG ingredients, together with
the ability to encode essential domain knowledge. Moreover, adaptive
tuning of the Bayesian network parameters provides a means to focus on
the rare coverage cases.

In a nutshell, the design of the Bayesian network starts with
identifying the ingredients (attributes) that will constitute the
directives to the test generator on the one hand, and the coverage
model on the other. These attributes are dictated by the interface to
the simulation environment, to the coverage analysis tool, and by the
specification of the coverage model in the test plan. These
ingredients are used as the first guess about the nodes in the graph
structure. Connecting these nodes with edges is our technique for
expert knowledge encoding. Obviously, using a fully connected graph,
i.e. with an edge between every pair of nodes, represents absolutely
no knowledge about the possible dependencies and functionalities
within the model. Hence, as the graph structure becomes sparser, it
represents deeper domain knowledge. At this point, hidden nodes can be
added to the structure, either to represent hidden causes, which
contribute to a better description of the functionalities of the
model, or to simplify the structure from a complexity standpoint.

After the Bayesian network structure is specified, it is trained using
a sample of directives and the respective coverage tasks. To this end,
we activate the simulation environment and construct a training set
out of the directives used and the resulting coverage tasks. We then
use one of the many known learning algorithms to estimate the Bayesian
network's parameters (i.e. the set of conditional probability
distributions). This completes the design and training of the Bayesian
network model.
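
As a rough sketch of this training step (a deliberately simplified,
hypothetical helper for a single directive attribute, not the learning
algorithm used in the IBM work), the CPT of a coverage attribute can be
estimated by maximum likelihood from the (directive, coverage) pairs
harvested from the simulation runs:

Code:

from collections import Counter, defaultdict

def estimate_cpt(samples):
    """Maximum-likelihood estimate of P(coverage_event | directive) from
    (directive, coverage_event) pairs collected from simulation runs."""
    counts = defaultdict(Counter)
    for directive, event in samples:
        counts[directive][event] += 1
    return {d: {e: c / sum(ev.values()) for e, c in ev.items()}
            for d, ev in counts.items()}

# Pairs that a real flow would harvest from the coverage reports:
samples = [("favor_reads", "read_hit"), ("favor_reads", "read_hit"),
           ("favor_reads", "write_miss"), ("favor_writes", "write_miss")]
print(estimate_cpt(samples))
# -> P(read_hit | favor_reads) = 2/3, P(write_miss | favor_reads) = 1/3,
#    P(write_miss | favor_writes) = 1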

In the evaluation phase, the trained Bayesian network can be used to
determine directives for a desired coverage task, via posterior
probabilities and MAP (maximum a posteriori) or MPE (most probable
explanation) queries, which use the coverage task attributes as
evidence. For example, in a model for which the directives are weights
of possible outcomes for internal draws in the test generator, such a
query returns the weight assignment most likely to produce the desired
coverage task.
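
As a minimal sketch of such a query (brute-force enumeration over a single
directive attribute, not the real multi-attribute MAP/MPE inference), one
can pick the directive value with the highest posterior probability of
producing the desired coverage event:

Code:

def map_directive(prior, cpt, desired_event):
    """Return argmax_d P(d) * P(desired_event | d), i.e. the directive
    value most likely to produce the desired coverage event."""
    best, best_score = None, -1.0
    for directive, p_d in prior.items():
        score = p_d * cpt.get(directive, {}).get(desired_event, 0.0)
        if score > best_score:
            best, best_score = directive, score
    return best

prior = {"favor_reads": 0.5, "favor_writes": 0.5}
cpt = {"favor_reads":  {"read_hit": 0.7, "write_miss": 0.3},
       "favor_writes": {"read_hit": 0.2, "write_miss": 0.8}}
print(map_directive(prior, cpt, "write_miss"))   # -> favor_writes

The generator would then be run with the selected directives, and the
resulting coverage fed back into the next round of training.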

We successfully applied our CDG technique to the coverage of a large
coverage model (~20,000 coverage events) used in the verification of a
Storage Control Element of an IBM z-Series server. Using our CDG
technique, we were able to cover more than 95% of the coverage events
in the model using about 5,000 test-cases. In comparison, coverage of
the same model with test directives provided by the user was less than
80% after more than 30,000 test cases.

More details on our CDG technique and description of some of our early
experiments can be found in two papers we published on this subject:

- Shai Fine and Avi Ziv, "Coverage Directed Test Generation for
Functional Verification using Bayesian Networks", in Proceedings of
the 2003 Design Automation Conference (DAC), June 2003.

- Shai Fine and Avi Ziv, "Enhancing the Control and Efficiency of the
Covering Process", in Proceedings of the 2003 High-Level Design
Verification and Test Workshop (HLDVT'03), November 2003.

These papers also describe and provide references to some alternative
approaches to CDG, such as genetic algorithms and expert systems.

- Avi Ziv, Verification Technologies Department
IBM Research Lab at Haifa, Israel

srini (Senior)
Joined: Jan 23, 2004 | Posts: 436 | Location: Bengaluru, India
Posted: Sat Jan 24, 2004 7:34 am    Post subject: Re: Automating coverage feedback

Newsletter wrote:
(Originally from Issue 4.19, Item 5.0)

From: Paul Zehr

Simulation-based functional validation (pre-silicon) tends to have the problem that coverage feedback into test generation is manual. I would like to know if anyone has successfully automated this process on a reasonably complex design. I am particularly interested in the algorithm chosen to generate the new tests.


Not from a research background, but from a practical standpoint, I want to present our recent experience on "Using formal methods to fill functional coverage holes". We recently finished writing the second edition of Ben Cohen's PSL book (http://www.vhdlcohen.com), and during the course of this work we defined a few coverage points (in PSL, of course) that a primitive testbench couldn't hit through simulation. We then used formal tools to try and generate test cases to cover these holes. We have presented this in our latest book on PSL.

A few disclaimers:

1. Since this work was done for a book, the main goal was to demonstrate the methodology rather than to "verify it in full".
2. Since the language (PSL) and tools are still evolving, we only took simple examples, not a "reasonably complex design" - though we strongly believe that the methodology can be applied to blocks/sub-blocks as tools mature. In fact TNI-Valiosys, a French EDA firm, has a similar approach that works in conjunction with Specman; see http://www.improveware.com.


Thanks,
Srinivasan & Aji
http://www.noveldv.com

Martin1234 (Senior)
Joined: Jan 27, 2004 | Posts: 110
Posted: Tue Jan 27, 2004 2:40 pm

Newsletter wrote:
From: Avi Ziv
in the model using about 5,000 test-cases.


Avi,

Can you elaborate on what "5,000 test cases" means? I suspect you mean a few test cases generating 5,000 different sets of constraint solver solutions.

Martin

jeffli (Senior)
Joined: Jan 09, 2004 | Posts: 32
Posted: Thu Jan 29, 2004 11:28 pm    Post subject: Re: Automating coverage feedback

Quote:

Simulation-based functional validation (pre-silicon) tends to have the problem that coverage feedback into test generation is manual. I would like to know if anyone has successfully automated this process on a reasonably complex design. I am particularly interested in the algorithm chosen to generate the new tests.

The reason for feeding coverage back into test generation is to avoid valueless tests. Ideally, this needs to be done starting with the first test. A test can be valueless if it hits a coverage point that has already been hit. The ideal is probably to hit every coverage point exactly once with a set of tests.

This ideal is not known to be achievable, but many efforts have been made to rearrange simulation, coverage analysis and test generation operations more efficiently to approach it. Many of them are called formal verification.

There is a paper on Inclusive Simulation. Like formal verification, it has test generation, coverage analysis and simulation operations all under one cover, but it has simulator-like interfaces and capacity. For better efficiency, it does not maintain the big collection of coverage data. It only considers coverage when it decides whether a simulation operation is redundant. It assumes that all points both controllable and observable from the testbench are coverage points (including any combinations of them). This coverage goal is the most ambitious possible, but it is not too impractical if coverage data is not kept. Its focus is to get as much coverage as possible. It tells how far it is from complete coverage. It can merge the coverage achievements from multiple concurrent runs. It reports only indirectly which coverage points are hit, which is OK if there is no better way to hit the coverage holes.

Each of these approaches may have its own strengths and weaknesses. Trying more of them may generate better ideas.

jeffli (Senior)
Joined: Jan 09, 2004 | Posts: 32
Posted: Sat Jan 31, 2004 9:42 am    Post subject: Re: Automating coverage feedback

Newsletter wrote:

Simulation-based functional validation (pre-silicon) tends to have the problem that coverage feedback into test generation is manual. I would like to know if anyone has successfully automated this process on a reasonably complex design. I am particularly interested in the algorithm chosen to generate the new tests.

An ICCAD 2000 paper about Synopsys Ketchum shows its algorithms and experiments. I am not sure how it relates to the current Ketchum/Magellan. The authors' email addresses are all published, and they can tell you more.

SureSolve is probably the only commercial tool of this kind. Searching on www.eedesign.com can show its history, from the announcement in 1998 to the company merger in 1999. It seems to have disappeared now. Its algorithm is published in U.S. patent 6,141,630 (online version available at http://www.uspto.gov/patft/index.html). Searching on the Internet, you can find some of the authors' contact information at Verisity. They know what happened before it went away.

This kind of tool can be misleading. First, if the design is missing some states, 100% state coverage does not tell anything about those missing states. Second, reaching a state is not the same as verifying a state's behavior, because the state's consequence has to be observed and compared against the expectation.

Test variety is probably the better form of coverage to focus on. This is normally done manually through test plans. It may be possible for some tools to help. These tools should understand which tests are valid and what observed behavior is compared with the expectation. This almost requires the same information that is normally in testbenches.

Also, most tools (especially the "smart" ones) do not understand the user's view of which tests are more critical than others. Constraints can be used to force the tools to ignore certain tests, but these constraints can be as complex as normal testbenches. Inclusive Simulation is probably the first new technology with the traditional concept of priority (gray shades of which tests are important).

z (Senior)
Joined: Jan 09, 2004 | Posts: 92
Posted: Sun Feb 15, 2004 12:59 am

Coverage is important. However, relying completely on coverage is hard. In addition to jeffli's two reasons, it is also hard to define coverage measurements that are neither too strong nor too weak. If the coverage measurement is too weak, some bugs are not hit even when 100% coverage is achieved. If it is too strong, 100% coverage can be impossible to achieve. For example, state coverage of an FSM can be too strong because some states can be impossible to reach. It is theoretically possible to identify all reachable states (or the feasibility of any strong coverage measurement), but it may be practical only for simple cases.