Motivating Time-Inconsistent Agents: A Computational Approach
|  |  |
|---|---|
| Main Authors: |  |
| Format: | Online Article Text |
| Language: | English |
| Published: | Springer US, 2018 |
| Subjects: |  |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6447518/ https://www.ncbi.nlm.nih.gov/pubmed/31007568 http://dx.doi.org/10.1007/s00224-018-9883-0 |
Summary: We study the complexity of motivating time-inconsistent agents to complete long-term projects in a graph-based planning model proposed by Kleinberg and Oren (2014). Given a task graph G with n nodes, our objective is to guide an agent towards a target node t under certain budget constraints. The crux is that the agent may change its strategy over time due to its present bias. We consider two strategies to guide the agent. First, a single reward is placed at t and arbitrary edges can be removed from G. Second, rewards can be placed at arbitrary nodes of G, but no edges may be deleted. In both cases we show that it is NP-complete to decide whether a given budget is sufficient to keep the agent motivated. For the first setting, we give complementary upper and lower bounds on the approximability of the minimum required budget. In particular, we devise a [Formula: see text]-approximation algorithm and prove NP-hardness for ratios greater than [Formula: see text]. We also argue that the second setting does not permit any efficient approximation unless P = NP.
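The summary describes the graph-based planning model of Kleinberg and Oren only informally. The sketch below simulates a present-biased agent's walk through a task graph under one common convention: the agent charges the next edge at full cost but discounts all later costs and the reward by a bias factor beta, abandoning the project when every continuation looks too expensive. The function name `agent_walk`, the node labels, the numeric values, and this particular discounting convention are illustrative assumptions, not code or parameters from the paper.

```python
import math

def agent_walk(graph, costs, source, target, reward, beta=0.5):
    """Simulate a present-biased agent on a task DAG (illustrative sketch).

    graph:  adjacency dict {node: [successor, ...]}
    costs:  dict {(u, v): edge cost}
    reward: reward placed at the target node
    beta:   present-bias factor applied to future costs and the reward
    """
    def dist(v):
        # Cheapest remaining cost from v to the target (exhaustive recursion,
        # fine for small illustrative instances of a DAG).
        if v == target:
            return 0.0
        options = [costs[(v, u)] + dist(u) for u in graph.get(v, [])]
        return min(options) if options else math.inf

    path, v = [source], source
    while v != target:
        # Perceived cost of edge (v, u): current edge at face value plus the
        # cheapest continuation discounted by beta.
        choices = [(costs[(v, u)] + beta * dist(u), u) for u in graph.get(v, [])]
        if not choices:
            return path, False          # dead end: agent cannot proceed
        perceived, nxt = min(choices)
        if perceived > beta * reward:   # discounted reward no longer worth it
            return path, False          # agent abandons the project
        path.append(nxt)
        v = nxt
    return path, True

# A tiny made-up instance showing time inconsistency: at s the agent plans to
# finish, but once it reaches a the remaining edge looks too expensive.
graph = {"s": ["a"], "a": ["t"]}
costs = {("s", "a"): 1.0, ("a", "t"): 4.0}
print(agent_walk(graph, costs, "s", "t", reward=6.0, beta=0.5))
# -> (['s', 'a'], False): the agent starts the project and then gives up.
```

This kind of simulation only illustrates the agent's behaviour; the paper's results concern how hard it is to choose edge removals or reward placements, within a budget, so that such an agent never abandons the walk to t.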