Abstract
Under tight advertising budget constraints, advertisers must adjust their daily budgets in real time according to advertisement performance. This allows them to reserve scarce budget for better opportunities later in the schedule and to avoid surges of ineffective clicks that incur unnecessary costs. However, advertisers usually lack sufficient knowledge and time for real-time advertising operations in search auctions. We formulate the budget adjustment problem as a state-action decision process in the reinforcement learning (RL) framework. Considering the dynamics of marketing environments and some distinctive features of search auctions, we extend continuous reinforcement learning to fit budget decision scenarios. The market utility is defined as the discounted total clicks obtained during the remaining period of an advertising schedule. We conduct experiments with real-world data from search advertising campaigns to validate and evaluate our budget adjustment strategy. Experimental results show that our strategy outperforms two baseline strategies.
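The abstract's notion of market utility (discounted total clicks over the remaining schedule) and its state-action decision process can be sketched as follows. This is a minimal illustrative sketch only: the state, action, and reward definitions below are assumptions, and the tabular Q-learning update merely approximates the continuous RL extension the paper describes.

```python
GAMMA = 0.9  # assumed discount factor for future clicks

def discounted_total_clicks(daily_clicks, gamma=GAMMA):
    """Market utility: discounted sum of clicks expected over the
    remaining days of the advertising schedule."""
    return sum(gamma ** t * c for t, c in enumerate(daily_clicks))

def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=GAMMA):
    """One tabular Q-learning step for a budget-adjustment action
    (e.g. raise/hold/lower the daily budget). Illustrative only:
    the paper extends *continuous* RL, which a table only mimics."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]
```

For instance, two remaining days yielding 10 clicks each give a utility of 10 + 0.9 x 10 = 19 discounted clicks, so the agent trades off spending today against reserving budget for tomorrow.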
| Original language | English |
|---|---|
| Title of host publication | Proceedings - 21st Workshop on Information Technologies and Systems, WITS 2011 |
| Publisher | Jindal School of Management, JSOM |
| Pages | 67-72 |
| Publication status | Published - Dec 2011 |
| Event | 21st Workshop on Information Technologies and Systems, WITS 2011 - Shanghai, China Duration: 3 Dec 2011 → 4 Dec 2011 |
Conference
| Conference | 21st Workshop on Information Technologies and Systems, WITS 2011 |
|---|---|
| Country | China |
| City | Shanghai |
| Period | 3/12/11 → 4/12/11 |
Research Keywords
- Budget adjustment
- Dynamical adjustment
- Reinforcement learning
- Search auctions