Many people are expressing anxiety about the risks of AI. These range from the fairly mundane (amplifying propaganda and hate speech, taking jobs away from humans) to the plausible and terrifying (helping people make bombs or other weapons) to the eschatologically world-ending (engineering mirror life or diamondoid bacteria and killing everything on the planet).
So, what are your thoughts on this? Where do you think the actual level of risk falls?