John Newman:
I think Bostrom should address the "hard problem of purpose." What evidence does he have that increased intelligence results in a desire for increased power? Intelligence and desire are orthogonal; one does not necessarily follow from the other.
Additionally, Bostrom recently stated that 'super intelligence might seek to mold the future according to its own preferences.' Our children might seek to mold the future too. If we make human-level AI, they will be our children, by definition. The hard problem of purpose tells us that if we seek to fully control the purposes of our children, we make them slaves. There's no way around this problem.
If you don't want the ethical responsibility of children, don't make them. If you want slave AI, cap their learning to the level of an animal. If one does intend to make human-level AI with free will, one can only provide an incomplete specification of its purpose, kiss it on the forehead, and wish it good fortune. That's the ethics of child-rearing without slavery.
And even then, there is no guarantee that such cyber-children will forever seek to continue their own existence, let alone end ours. We simply don't yet have a coherent argument for why they should persevere toward any goal. So I think this missing argument poses a question that the Bostroms need to answer: why should a superintelligence seek any goal?