Multi-armed bandit approach to password guessing

06/29/2020
by Hazel Murray, et al.

The multi-armed bandit is a mathematical interpretation of the problem a gambler faces when confronted with a number of different machines (bandits). The gambler wants to explore different machines to discover which machine offers the best rewards, but simultaneously wants to exploit the most profitable machine. A password guesser faces a similar dilemma: they have lists of leaked password sets, dictionaries of words, and demographic information about the users, but they do not know which dictionary will yield the best rewards. In this paper we provide a framework for applying the multi-armed bandit problem to password guessing, and we use examples to show that it can be effective.

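To make the explore/exploit trade-off concrete, the following is a minimal epsilon-greedy bandit sketch in which each arm is a candidate guessing dictionary and the reward is whether a guess drawn from that dictionary cracks a target password. The dictionary contents, target list, and epsilon value are hypothetical illustrations, and the paper's actual framework may use a different bandit strategy; this is only an assumed, simplified instance of the general idea.

```python
import random

# Hypothetical guessing dictionaries (the bandit "arms"); not data from the paper.
DICTIONARIES = {
    "leaked_passwords": ["123456", "password", "iloveyou", "princess"],
    "english_words": ["dragon", "monkey", "sunshine", "welcome"],
    "keyboard_walks": ["qwerty", "asdfgh", "1q2w3e4r", "zxcvbn"],
}

# Hypothetical set of target passwords the guesser is trying to crack.
TARGETS = {"123456", "qwerty", "sunshine", "password", "dragon"}

EPSILON = 0.1                                   # probability of exploring at random
counts = {name: 0 for name in DICTIONARIES}     # pulls per arm
rewards = {name: 0.0 for name in DICTIONARIES}  # cumulative successes per arm


def choose_arm():
    """Explore with probability EPSILON, otherwise exploit the best mean reward."""
    if random.random() < EPSILON:
        return random.choice(list(DICTIONARIES))
    # Unpulled arms get infinite priority so every dictionary is tried at least once.
    return max(DICTIONARIES,
               key=lambda a: rewards[a] / counts[a] if counts[a] else float("inf"))


def pull(arm):
    """Make one guess from the chosen dictionary; reward 1 if it cracks a target."""
    guess = random.choice(DICTIONARIES[arm])
    return 1.0 if guess in TARGETS else 0.0


for _ in range(1000):
    arm = choose_arm()
    counts[arm] += 1
    rewards[arm] += pull(arm)

for name in DICTIONARIES:
    mean = rewards[name] / counts[name] if counts[name] else 0.0
    print(f"{name}: pulls={counts[name]}, mean reward={mean:.3f}")
```

Over repeated rounds the guesser's pull counts concentrate on the dictionary whose guesses succeed most often, while the epsilon term keeps occasionally sampling the other dictionaries in case their payoff changes.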