
Counterexample-guided Planning
Krishnendu Chatterjee, Tom Henzinger, Ranjit Jhala, Rupak Majumdar

Citation
Krishnendu Chatterjee, Tom Henzinger, Ranjit Jhala, Rupak Majumdar. "Counterexample-guided Planning". UAI, July, 2005.

Abstract
Planning in adversarial and uncertain environments can be modeled as the problem of devising strategies in stochastic perfect information games. These games are generalizations of Markov decision processes (MDPs): there are two (adversarial) players, and a source of randomness. The main practical obstacle to computing winning strategies in such games is the size of the state space. In practice, therefore, one typically works with abstractions of the model. The difficulty is to come up with an abstraction that is neither too coarse to remove all winning strategies (plans), nor too fine to be intractable. In verification, the paradigm of counterexample-guided abstraction refinement has been successful in constructing useful but parsimonious abstractions automatically. We extend this paradigm to probabilistic models (namely, 2½-player games and, as a special case, MDPs). This allows us to apply the counterexample-guided abstraction paradigm to the AI planning problem. As special cases, we get planning algorithms for MDPs and deterministic systems that automatically construct system abstractions.
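
The abstract-check-refine loop described above can be sketched for the deterministic special case (plain graph reachability) roughly as follows. This is an illustrative reconstruction under assumed data structures (an edge list, a partition of states into frozensets), not the paper's algorithm for 2½-player games: the abstraction over-approximates transitions between blocks, an abstract plan is replayed concretely, and a spurious plan triggers a block split.

```python
from collections import defaultdict, deque

def abstract_path(atrans, src, dst):
    """BFS in the abstract graph; returns a list of block indices or None."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        b = queue.popleft()
        if b == dst:
            path = []
            while b is not None:
                path.append(b)
                b = parent[b]
            return path[::-1]
        for c in atrans[b]:
            if c not in parent:
                parent[c] = b
                queue.append(c)
    return None

def cegar_reach(edges, init, goal, partition):
    """Abstract-check-refine loop for plain reachability.
    `partition` is a list of disjoint frozensets covering all states;
    returns a concrete path from `init` to `goal`, or None."""
    succ = defaultdict(set)
    for s, t in edges:
        succ[s].add(t)
    while True:
        block = {s: i for i, b in enumerate(partition) for s in b}
        # Over-approximation: block i -> block j if any concrete edge crosses.
        atrans = defaultdict(set)
        for s, t in edges:
            atrans[block[s]].add(block[t])
        path = abstract_path(atrans, block[init], block[goal])
        if path is None:
            return None  # unreachable even in the over-approximation
        # Replay the abstract counterexample concretely by forward image.
        reach, parent = {init}, {init: None}
        for i in range(1, len(path)):
            nxt = set()
            for s in reach:
                for t in succ[s]:
                    if block[t] == path[i]:
                        nxt.add(t)
                        parent.setdefault(t, s)
            if not nxt:  # spurious: no concrete step into block path[i]
                old, hit = partition[path[i - 1]], frozenset(reach)
                partition = [b for j, b in enumerate(partition)
                             if j != path[i - 1]] + [hit, old - hit]
                break
            reach = nxt
        else:
            if goal in reach:  # genuine plan: walk the parent links back
                out, s = [], goal
                while s is not None:
                    out.append(s)
                    s = parent[s]
                return out[::-1]
            # Reached goal's block but not goal itself: split that block.
            old, hit = partition[path[-1]], frozenset(reach)
            partition = [b for j, b in enumerate(partition)
                         if j != path[-1]] + [hit, old - hit]
```

Starting from the coarsest abstraction, e.g. `cegar_reach([(0, 1), (1, 3), (2, 3)], 0, 3, [frozenset({0, 1, 2, 3})])`, the loop refines only the blocks implicated in spurious counterexamples until it returns the concrete plan `[0, 1, 3]`; if no concrete plan exists, refinement terminates with `None`.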

Electronic downloads

  • uai05.ps · application/postscript · 370 kbytes
Citation formats  
  • HTML
    Krishnendu Chatterjee, Tom Henzinger, Ranjit Jhala, Rupak
    Majumdar. <a
    href="http://chess.eecs.berkeley.edu/pubs/75.html"
    >Counterexample-guided Planning</a>, UAI, July,
    2005.
  • Plain text
    Krishnendu Chatterjee, Tom Henzinger, Ranjit Jhala, Rupak
    Majumdar. "Counterexample-guided Planning". UAI,
    July, 2005.
  • BibTeX
    @inproceedings{ChatterjeeHenzingerJhalaMajumdar05_CounterexampleguidedPlanning,
        author = {Krishnendu Chatterjee and Tom Henzinger and Ranjit
                  Jhala and Rupak Majumdar},
        title = {Counterexample-guided Planning},
        booktitle = {UAI},
        month = {July},
        year = {2005},
        abstract = {Planning in adversarial and uncertain environments
                  can be modeled as the problem of devising
                  strategies in stochastic perfect information
                  games. These games are generalizations of Markov
                  decision processes (MDPs): there are two
                  (adversarial) players, and a source of randomness.
                  The main practical obstacle to computing winning
                  strategies in such games is the size of the state
                  space. In practice therefore, one typically works
                  with abstractions of the model. The difficulty is
                  to come up with an abstraction that is neither too
                  coarse to remove all winning strategies (plans),
                  nor too fine to be intractable. In verification,
                  the paradigm of counterexample-guided abstraction
                  refinement has been successful in constructing
                  useful but parsimonious abstractions
                  automatically. We
                  extend this paradigm to probabilistic models
                  (namely, 2 1/2-player games and, as a special case,
                  MDPs). This allows us to apply the
                  counterexample-guided abstraction paradigm to the
                  AI planning problem. As special cases, we get
                  planning algorithms for MDPs and deterministic
                  systems that automatically construct system
                  abstractions.},
        URL = {http://chess.eecs.berkeley.edu/pubs/75.html}
    }
    

Posted by Krishnendu Chatterjee, PhD on 9 May 2006.
Groups: chess
For additional information, see the Publications FAQ or contact webmaster at chess eecs berkeley edu.

Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.

©2002-2018 Chess