Today an article was published by VentureBeat on “OpenAI’s six-member board will decide ‘when we’ve attained AGI’”.
https://venturebeat.com/ai/openais-six-member-board-will-decide-when-weve-attained-agi/
Another super interesting article on AGI and OpenAI, but it also raises more questions:
what is their definition of AGI (artificial general intelligence), why do they need to “decide” on this, and what are the implications?
On the first question we can agree to disagree, since dozens of definitions circulate among academics and authors. Going back as far as 1965, we can refer to I.J. Good’s famous quote from ‘Speculations Concerning the First Ultraintelligent Machine’: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.”
Or maybe keep it even simpler, like John C. Lennox does in his book 2084: “Systems that can do all that human intelligence can do”.
Elon Musk shared last week at the UK AI Summit that we have about three years left until we reach AGI. Do you agree, and what is your definition?