Communication networks

A new reward model for MDP state aggregation with application to CAC and routing

Document identifier (DOI): 10.1002/ett.1007
Keywords: Natural Sciences, Computer and Information Sciences, CAC, MDP
Publication year: 2005
Relevant Sustainable Development Goals (SDGs):
SDG 9 Industry, innovation and infrastructure


An optimal solution of the call admission control and routing problem in multi-service loss networks, in terms of average reward per time unit, is possible by modeling the network behavior as a Markov decision process (MDP). However, even after applying the standard link independence assumption, solving the corresponding set of link problems may involve considerable numerical computation. In this paper, we study an approximate MDP framework on the link level, where vector-valued MDP states are mapped into a set of aggregate scalar MDP states corresponding to link occupancies. In particular, we propose a new model of the expected reward for admitting a call on the network. Compared to Krishnan and Hübner's method [11], our reward model more accurately reflects the bandwidth occupancy of the different call categories. The exact and approximate link MDP frameworks are compared by simulations, and the results show that the proposed link reward model significantly improves the performance of Krishnan and Hübner's method.
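To illustrate the state-aggregation idea the abstract describes, the following minimal Python sketch enumerates the exact vector-valued states of a single link and groups them into aggregate scalar states keyed by link occupancy. The capacity, per-class bandwidths, and rewards are hypothetical example values, and the code is an illustration of the general aggregation concept, not the paper's reward model.

```python
from itertools import product

# Hypothetical example parameters (not from the paper):
C = 10                # link capacity, in bandwidth units
bandwidths = (1, 2)   # per-class bandwidth demands b_j
rewards = (1.0, 2.5)  # per-class rewards r_j (carried along for context)

def occupancy(n):
    """Map a vector state n = (n_1, ..., n_J) to its scalar link occupancy."""
    return sum(nj * bj for nj, bj in zip(n, bandwidths))

def feasible_states():
    """Enumerate all vector states that fit within the link capacity."""
    ranges = [range(C // b + 1) for b in bandwidths]
    return [n for n in product(*ranges) if occupancy(n) <= C]

# Group vector states by aggregate occupancy: each scalar MDP state
# stands in for every vector state with the same total bandwidth in use.
aggregate = {}
for n in feasible_states():
    aggregate.setdefault(occupancy(n), []).append(n)

print(len(feasible_states()))  # number of exact vector states
print(len(aggregate))          # number of aggregate scalar states (0..C)
```

Even in this two-class toy example the aggregation shrinks the state space noticeably; for links carrying many call categories, the reduction from vector to scalar states is what makes the approximate link MDP framework computationally attractive.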


Ernst Nordström

Högskolan Dalarna; Datateknik

Jakob Carlström

