The input to the stochastic orienteering problem consists of a budget B and a metric space (V, d) in which each vertex v has a job with a deterministic reward and a random processing time (drawn from a known distribution). The processing times are independent across vertices. The goal is to devise a non-anticipatory policy for running jobs at different vertices that maximizes the expected reward, subject to the total distance traveled plus the total processing time being at most B. An adaptive policy is one that can choose the next vertex to visit based on the observed random instantiations; a non-adaptive policy, in contrast, is given by a fixed ordering of the vertices. The adaptivity gap is the worst-case ratio between the expected rewards of the optimal adaptive and optimal non-adaptive policies. We prove an Ω((log log B)^{1/2}) lower bound on the adaptivity gap of stochastic orienteering. This gives a negative answer to the O(1) adaptivity gap conjectured earlier, and comes close to the O(log log B) upper bound. The result holds even on a line metric. We also show an O(log log B) upper bound on the adaptivity gap for the correlated stochastic orienteering problem, in which the reward of each job is random and possibly correlated with its processing time. Using this, we obtain an improved quasi-polynomial-time approximation algorithm for correlated stochastic orienteering.
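
To make the adaptive versus non-adaptive distinction concrete, below is a minimal Monte Carlo sketch in Python on a line metric. The instance (vertex positions, rewards, processing-time distributions) and the greedy adaptive rule are hypothetical choices for illustration only; neither policy is the optimal one analyzed in the paper.

    import random
    from itertools import permutations

    # Hypothetical stochastic orienteering instance on a line metric,
    # for illustration only. The policies below are simple heuristics.

    B = 20.0                                    # budget: travel + processing
    positions = {"a": 2.0, "b": 5.0, "c": 9.0}  # vertices on a line; start at 0
    rewards   = {"a": 1.0, "b": 2.0, "c": 4.0}  # deterministic rewards
    E_TIME    = {"a": 2.0, "b": 2.0, "c": 8.0}  # expected processing times

    def sample_time(v):
        # Independent processing times; vertex "c" is high-variance.
        return random.choice([1.0, 15.0]) if v == "c" else random.uniform(1.0, 3.0)

    def run_nonadaptive(order):
        # Follow a fixed vertex ordering; a job that does not finish within
        # the budget earns no reward, and the policy stops.
        pos = spent = reward = 0.0
        for v in order:
            travel = abs(positions[v] - pos)
            t = sample_time(v)
            if spent + travel + t > B:
                break
            spent += travel + t
            pos = positions[v]
            reward += rewards[v]
        return reward

    def run_adaptive():
        # Non-anticipatory greedy policy: using only realized times so far
        # (via the budget already spent), pick the remaining vertex with the
        # best reward per unit of expected cost that still fits in expectation.
        pos = spent = reward = 0.0
        remaining = set(positions)
        while remaining:
            feasible = [v for v in remaining
                        if spent + abs(positions[v] - pos) + E_TIME[v] <= B]
            if not feasible:
                break
            v = max(feasible, key=lambda u: rewards[u] /
                    (abs(positions[u] - pos) + E_TIME[u]))
            remaining.discard(v)
            travel = abs(positions[v] - pos)
            t = sample_time(v)                  # realized only after committing
            if spent + travel + t > B:          # job did not finish in time
                break
            spent += travel + t
            pos = positions[v]
            reward += rewards[v]
        return reward

    def expected(policy, trials=20000):
        return sum(policy() for _ in range(trials)) / trials

    best_nonadaptive = max(expected(lambda o=o: run_nonadaptive(o))
                           for o in permutations(positions))
    adaptive = expected(run_adaptive)
    print(f"best non-adaptive ~ {best_nonadaptive:.3f}, "
          f"adaptive heuristic ~ {adaptive:.3f}")

On instances with a high-variance job (vertex "c" here), an adaptive policy can react to unlucky realizations before committing further travel and time, which is exactly the flexibility the adaptivity gap quantifies.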

Bansal, N., & Nagarajan, V. (2013). On the Adaptivity Gap of Stochastic Orienteering. arXiv.org e-Print archive, Cornell University Library.