Video instructions and help with filling out and completing Form 2220

Instructions and Help about Form 2220

Talk about your time allocation. I think one of the things you spend an awful lot of time thinking about, I know, is artificial intelligence. It's something that you and I have a shared interest in, and it's something that our audience is interested in as well. The question here is: a lot of experts in AI don't share the same level of concern that you do about the potential risks. What specifically do you see that they don't?

Well, the biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they're smarter than they actually are. In general, we are all much less smart than we think we are, by a lot. This tends to plague smart people: they define themselves by their intelligence, and they don't like the idea that a machine could be way smarter than them, so they discount the idea, which is fundamentally flawed. That's the wishful-thinking situation. I'm really quite close to the cutting edge in AI, and it scares the hell out of me. It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential. You can see that in things like AlphaGo, which, in the span of maybe six to nine months, went from being unable to beat even a reasonably good Go player, to beating the European champion, who was ranked 600, to beating Lee Sedol four out of five, who had been world champion for many years, then beating the current world champion, then beating everyone while playing simultaneously. Then there was AlphaZero, which crushed AlphaGo 100 to 0, and AlphaZero just learned by playing itself. It can play basically any game that you put the rules in for: whatever rules you give it, it will literally read the rules, play the game, and be superhuman at it. Nobody expected that rate of improvement. So for those same experts who think AI is not progressing at the rate that I'm saying, I think you'll find that their batting average on predictions for things like Go and other AI advancements is quite bad. It's not good.

We'll see this also with self-driving. I think probably by the end of next year, self-driving will encompass essentially all modes of driving and be at least 100 to 200 percent safer than a person. We're talking maybe eighteen months from now. There was a third-party study of Tesla's Autopilot version 1, which is relatively primitive, and it found a 45 percent reduction in highway accidents, and that's despite Autopilot 1 being just version 1. Version 2, I think, will be at least two or three times better, and that's the current version that's running right now. So the rate of improvement is really dramatic. We have to figure out some way to ensure that the advent of digital superintelligence is one which is symbiotic with humanity. I think that's the single biggest existential crisis that we face, and the most pressing one.

And how do we do that? I mean, if we take it that it's inevitable at this point that some version of AI is coming down the line, how do we steer through it?

Well, I'm not normally an advocate of regulation and oversight. I mean, I think one should generally err on the side of minimizing those things. But this is a case where you have a very serious danger to the public, and therefore there needs to be a public body that has insight, and then oversight, to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads, by a lot, and nobody would suggest that we allow anyone to just build nuclear warheads if they want. That would be insane. And mark my words: AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.

It's a question you've been asking for a long time, and I think it's a question that's come to the forefront over the last year. I think we've all been focused on the idea of artificial superintelligence, which is clearly a danger, but maybe, you know, a little further out. What's happened over the last years is what I've been calling artificial stupidity: algorithmic manipulation of social media. We're in it now; it's starting to happen. What's the intervention at this point?

So I'm not really all that worried about the short-term stuff. Things like narrow AI are not a species-level risk. It will result in dislocation, in lost jobs, and, you know, better weaponry and that kind of thing, but it is not a fundamental species-level risk, whereas digital superintelligence is. So it's really all about laying the groundwork to make sure that, if humanity collectively decides that creating digital superintelligence is the right move, then we should do so very, very carefully. Very, very carefully. This is the most important thing that we could possibly do.

Building on that, other than AI and the other issues that you're tackling, transportation, energy production, aerospace, what issues should our next generation of leaders be focused on solving? What else is coming down the line?

Well, I mean, there are other things that are