Monday, September 11, 2023

AI ...This will AFFECT everyone in 1-2 months. PROACTIVE REGULATION is NOT the government's forte

Blogger's note: With transcript



This was futile. I tried for years.

It will cause damage or death. There will be an outcry.

You could start a war or something like it; whatever it is driven to do, it will do.

It's too late.

I mean, humans have been the smartest creature on Earth for a long time, and that is going to change with what's typically called artificial general intelligence. So this is, say, an AI that is smarter than a human in every way. It could even simulate a human. So, you know, this is something we should be concerned about. I think there should be government oversight of AI development, especially super-advanced AI. For anything that is a potential danger to the public, we generally agree there should be government oversight to ensure that public safety is taken care of.

Are people more inclined to listen today? It seems like an issue that's brought up more often over the last few years than it was maybe five, ten years ago, when it seemed like science fiction.

Maybe they will. So far they haven't.

I think people don't like regulation, but normally the way that regulations work is very slow. Very slow indeed. Usually some new technology will cause damage or death, there will be an outcry, there will be an investigation, years will pass, there will be some sort of insight committee, then rule-making, then oversight, and eventually regulations. This all takes many years. This is the normal course of things. Look at, say, automotive regulations: how long did it take for seat belts to be required? Only after many, many people died did regulators insist on seat belts. This time frame is not relevant to AI. You can't take ten years from the point at which it's dangerous; by then it's too late.
Out of control?

Out of control, yeah. People call it the singularity, and that's probably the right word for it: it is a singularity. It's hard to predict, like a black hole, what happens past the event horizon.

Right, so once it's implemented it's very difficult, because the genie won't go back in the bottle; we don't know what's going to happen.

And it will be able to improve itself?

Yes, yeah.

I mean, with respect to AI and robotics, I always approach these things with some trepidation, because I certainly do not want to play a hand in anything that could potentially be harmful to humanity. Now, humanoid robots are clearly happening. You look at, say, Boston Dynamics; their demonstrations are better every year. So there will be humanoid robots.

The rate of advancement of AI is very rapid. Even if Tesla stopped doing AI, I think we're still on track to develop artificial general intelligence, machine intelligence smarter than the smartest human. I think people generally underestimate the capability of AI. They sort of think it's like a smart human, but it's really going to be much more than that. It'll be much smarter than the smartest human. Maybe it'll be like this: can a chimpanzee really understand humans? Not really. We just seem like strange aliens to them; they mostly just care about other chimpanzees. And this will be how it is, more or less, in relative terms. If the difference is only that small, that would be amazing. Probably it's much, much greater.

Even assuming a benign scenario with AI, we will just be too slow. To a computer with, say, an exaflop, you know, many exaflops of compute capability, a millisecond is an eternity, and to us it's nothing. So I always think human speech will sound to a computer like very slow tonal wheezing. It's kind of like whale sounds.
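To put rough numbers on that speed gap, here is a small back-of-the-envelope sketch (my own illustration, not from the interview), assuming a round figure of 10^18 operations per second for an exaflop-class machine and a typical human reaction time of about 200 ms:

    # Back-of-the-envelope sketch (assumed round numbers, not from the interview):
    # how many operations an exaflop-class machine completes in one millisecond,
    # and in one typical human reaction time (~200 ms).

    EXAFLOP_OPS_PER_SECOND = 1e18   # assumed: 1 exaflop = 10^18 operations/second
    HUMAN_REACTION_SECONDS = 0.2    # assumed: ~200 ms, a commonly cited ballpark

    ops_in_one_millisecond = EXAFLOP_OPS_PER_SECOND * 1e-3
    ops_in_one_reaction = EXAFLOP_OPS_PER_SECOND * HUMAN_REACTION_SECONDS

    print(f"Operations in 1 ms:               {ops_in_one_millisecond:.0e}")   # ~1e+15
    print(f"Operations in one human reaction: {ops_in_one_reaction:.0e}")      # ~2e+17

By that count, a single millisecond already holds on the order of a quadrillion operations, which is the sense in which human timescales would look glacial to such a machine.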
What are you most worried about going forward?

Well, like I said, AI and robotics will bring about what might be termed the age of abundance; other people have used this term. And this is my prediction: it will be an age of abundance for everyone. I guess the danger would be that artificial general intelligence, or digital superintelligence, decouples from the collective human will and goes in a direction that, for some reason, we don't like, whatever direction that might be.

What's the biggest threat to humanity's future?

Well, AI is certainly one of the biggest risks. It could be the biggest risk.

People worry a lot about this today. Those people, I call them college smart. People like us, street smart, we're never scared of that. We think it's great fun and we want to change ourselves to embrace it.

I don't know, man. That's like famous last words.

Let me tell you, the rate of advancement of computers in general is insane. A good example would be video games. Forty or fifty years ago you had Pong, just two rectangles and a square. Now you've got photorealistic, real-time simulations with millions of people playing simultaneously. If you assume any rate of improvement at all, these games will eventually be indistinguishable from reality; you will not be able to tell the difference. Either that or civilization will end. Those are the two options. I made those comments some years ago.

But it feels like we are the biological bootloader for AI, effectively. We are building it, and we're building progressively greater intelligence, and the percentage of intelligence that is not human is increasing. Eventually we will represent a very small percentage of intelligence.
But the AI is informed, strangely, by the human limbic system. It is, in large part, our id writ large.

How so?

Well, all those things, the sort of primal drives, all the things that we like and hate and fear, they're all there on the internet. They're a projection of our limbic system. Digital superintelligence would also potentially be a public safety risk, and so I think it's very important for regulators to keep an eye on that.

Who should own the data, then?

I think everyone should own their own data; individuals should own their data. And you certainly shouldn't be tricked by some terms and conditions on a website into suddenly not owning your data. That's crazy. But, you know, we wouldn't let people develop a nuclear bomb in the backyard just for the hell of it; that seems crazy. Digital superintelligence, I think, has the potential to be more dangerous than a nuclear bomb, so somebody should be keeping an eye on it. We can't have the inmates running the asylum here.

Well, computers actually are already much smarter than people on so many dimensions; we just keep moving the goalposts. We used to think, for example, that being good at chess was the mark of a smart human, and then Kasparov was crushed by Deep Blue in '97. That was a long time ago, 22 years. Right now your cell phone could literally crush the world champion at chess. Go used to be something that humans were better at than computers. Then Lee Sedol was beaten by AlphaZero, or, I should say, AlphaGo: AlphaGo beat Lee Sedol four to one. Then there's AlphaZero, and AlphaZero crushed AlphaGo 100 to zero. Now it's just pointless, because it just keeps playing itself. A human trying to play a computer at Go is like trying to fight Zeus. It's not going to work.

Are we hopeless?

We're hopeless. Hopelessly inadequate.

So we're effectively a cyborg right now, where your phone or computer is an extension of yourself, but your input is bounded by the screen and your output is bounded by your thumbs or fingers. Effectively, over time, we would drift away from machine intelligence. With a high-bandwidth neural interface, we can solve the I/O problem and go along for the ride, symbiotically, just as our cortex and our limbic system are quite happy together. I've not met anyone who wants to delete their cortex or their limbic system.
Well, I mean, I'll grant that any group of people, like a company, is essentially a cybernetic collective of people and machines. That's what a company is. And there are different levels of complexity in the way these companies are formed. And then there is sort of a collective AI in Google search, where we're all plugged in as nodes on the network, like leaves on a big tree, and we're all feeding this network with our questions and answers. We're all collectively programming the AI. And Google, plus all the humans that connect to it, is one giant cybernetic collective. This is also true of Facebook and Twitter and Instagram and all these social networks: giant cybernetic collectives, humans and electronics all interfacing.

And constantly, now. Constantly connected.

Yes, constantly.

At Google, at least one engineer thought that what was happening in terms of their AI machinery was closer to human thought than had been seen before, and was quite worried it had a personality. Is that something that you think about at all, or worry about?

I think we should be concerned about AI, and I've said for a long time that there ought to be an AI regulatory agency that oversees artificial intelligence for the public good. Just as with anything that is a risk to the public, whether that's the Food and Drug Administration or the Federal Aviation Administration or the Federal Communications Commission, wherever there's a public risk or a public good at stake, it's good to have a sort of government referee and a regulatory body. I think we should have that for AI, and we don't currently.

Okay, well, let me tell you my view on AI.
You can view the advancement of AI as solving things with increasing degrees of freedom. The thing with the most degrees of freedom is reality, but AI is steadily advancing, solving things that have more and more degrees of freedom. Obviously, something like checkers was very easy to solve; we could solve it with classical software, classical computing, not really all that challenging. In fact, there is a complete solution for checkers, meaning literally every version of checkers is known. Then there's chess, which had many, many more degrees of freedom than checkers, many orders of magnitude more than checkers, but still, I would say, a low-degree-of-freedom game. Then there's Go, which had many orders of magnitude more degrees of freedom than chess. So it's really just stepping through orders of magnitude of degrees of freedom. This is the way, I think, to view the advancement of intelligence.
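As a rough illustration of those jumps in degrees of freedom (my own addition; the position counts are commonly cited estimates, not figures from the interview), the state spaces of these games differ by tens of orders of magnitude:

    # Rough sketch (my own illustration; the position counts are commonly cited
    # estimates, not figures from the interview): how many orders of magnitude
    # separate checkers, chess, and Go in state-space size.

    import math

    approx_positions = {
        "checkers (solved, 2007)": 5e20,   # ~5 x 10^20 reachable positions
        "chess": 1e47,                     # rough order-of-magnitude estimate
        "go (19x19)": 2e170,               # ~2 x 10^170 legal positions
    }

    for game, positions in approx_positions.items():
        print(f"{game:25s} ~10^{math.log10(positions):.0f} positions")

Each step up is not merely a little harder but tens of orders of magnitude harder, which is the sense in which AI progress can be read as stepping through degrees of freedom.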
Yeah, and it's really going to get to the point where it can completely simulate a person in every way possible, like many people simultaneously. In fact, obviously, there's a strong argument that we're in a simulation right now. It sort of reminds me of that joke: if life were a video game, what would the review be? Well, the graphics are incredible, the plot is confusing, and the respawn takes a long time.

Yeah, that's a video game; that's life. I'm going to fly through the video game.

How much do you see artificial intelligence coming into the workplace?

Well, first of all, on the artificial intelligence front, I have exposure to the very most cutting-edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal. I think we should be really concerned about AI. This is a rare case where I think we need to be proactive in regulation instead of reactive, because by the time we are reactive in AI regulation, it's too late.
Normally, the way regulations are set up is that a whole bunch of bad things happen, there's a public outcry, and then after many years a regulatory agency is set up to regulate that industry. There's a bunch of opposition from companies who don't like being told what to do by regulators, and it takes forever. In the past, that has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs, or bad food were not. They were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole. AI is a fundamental existential risk for human civilization.

Yeah, I think machines will... well, with AI, I've spoken a lot about this: AI will be able to do everything better than humans over time. Everything. The challenge here is that government regulatory authorities tend to be set up in reaction to something bad that happened.
You mentioned ChatGPT earlier.

You know, I played a significant role in the creation of OpenAI. Essentially, at the time I was concerned that Google was not paying enough attention to AI safety, and so, with a number of other people, I created OpenAI. Although it was initially created as an open-source non-profit, now it is closed-source and for-profit. I don't have any stake in OpenAI anymore, nor am I on the board, nor do I control it in any way. But ChatGPT, I think, has illustrated to people just how advanced AI has become, because AI has been advanced for a while; it just didn't have a user interface that was accessible to most people. So what ChatGPT has really done is put an accessible user interface on AI technology that has been present for a few years, and there are much more advanced versions of that coming out. So I think we need to regulate AI, frankly, because for any technology which is potentially a risk to people, whether it's aircraft or cars or medicine, we have regulatory bodies that oversee the public safety of cars and planes and medicine. I think we should have a similar sort of regulatory oversight for artificial intelligence, because it is, I think, actually a bigger risk to society than cars or planes or medicine.
And this may slow down AI a little bit, but I think that might also be a good thing. The challenge here is that government regulatory authorities tend to be set up in reaction to something bad that happened. Take aircraft or cars: cars were unregulated at the beginning, but there were lots of crashes, and in some cases manufacturers were cutting corners, and a lot of people were dying. The public was not happy about that, so regulatory authorities were established to improve safety, and now commercial airliners are extremely safe. In fact, they're safer than driving somewhere; the safety per mile of a commercial airliner is better than a car, and cars are also extremely safe compared to where they used to be. But take the introduction of seat belts: the auto industry fought seat belts as a safety measure for, I think, ten or fifteen years before the regulators finally made them put seat belts in cars, and that greatly improved the safety of cars. Airbags were another big safety improvement. So my concern is that with AI, if something goes wrong, the reaction might be too slow from a regulatory standpoint.
You know, I would say it is one of the biggest risks to the future of civilization. It's both positive and negative: it has great promise, great capability, but with that also comes great danger. I mean, like I say, with nuclear, with the discovery of nuclear physics you had nuclear power generation but also nuclear bombs. So anyway, I think we should be quite concerned about it, and we should have some regulation of what is fundamentally a risk to the public.

So, you know, I thought it was important, kind of for the future of civilization, to try to correct that thumb on the scale, if you will, and just have Twitter more accurately reflect, like I said, the values of the people of Earth. That's the intention, and hopefully we succeed in doing that.

But how do you see Twitter, say, five years down the road? What's your vision for this platform? What should it do?

Well, I have this sort of long-term ambition for something called X.com, from way back in the day, which is kind of like an everything app, where it's just maximally useful. It does payments, it provides financial services, it provides information flow, really anything digital, and it also provides secure communications. I think it'll be as useful as possible, as entertaining as possible, and also a source of truth: if you want to find out what's going on, and what's really going on, then you'd be able to go on the X app and find out. So it's a sort of source of truth and a maximally useful, I guess, open system. And Twitter is essentially an accelerant to that sort of maximally useful everything app.
But how are you going to... I mean, if you look at Twitter today, it's a platform where sometimes there is a lot of misinformation, and sometimes I don't feel comfortable, because there is this negativity between nations, between people, between different ethnic groups. How are we going to fix this issue, when you are on a mission for humanity to bring them together?

Yeah, there's something that we're putting a lot of effort into called Community Notes. It's currently just in English, but we will be expanding it to all languages. That, I think, is quite a good way to assess the truth of things, where it's the community itself, basically the people of Earth, who are, not exactly voting, but competing to provide the most accurate information. So it's sort of a competition for truth, and I think it's a very powerful concept to have a competition for truth. Because what is true? What may be true to some may not be viewed as true to others; you would want to have the closest approximation of that.
These technology tools are definitely double-edged swords. We had nuclear bombs, which could potentially destroy civilization; obviously we have AI, which could destroy civilization; we have global warming, which could destroy civilization, or at least severely disrupt it.
Sure, well, I think some of it's going to sound pretty obvious, but anything to do with sustainable energy is going to be important going into the future. So anything to do with lithium-ion batteries, for stationary storage or for cars, aircraft, boats, is going to be very significant. Artificial intelligence will obviously be very significant in all fields: self-driving cars, self-flying airplanes, self-piloting boats. So I'd probably recommend learning those. These are very technical subjects, of course, and there are many other worthy pursuits, but as a technologist that's what I would recommend: AI and sustainable technology. I also think there's a lot of opportunity in synthetic biology. The synthetic messenger RNA stuff is going to be a revolution in medicine, I think comparable to audio going from analog to digital. With synthetic RNA, it's like medicine is going digital. It's a much more profound revolution than I think most people realize.
Well, I think we should be a little concerned about AI, because we don't want digital superintelligence that goes wrong and causes damage to humanity. So I think we do need to be cautious with artificial intelligence. On the synthetic biology front, that also has the potential to be dangerous, because it is possible to create a far more damaging virus than would occur in nature. So these technology tools are definitely double-edged swords: the more powerful the technology, the more careful we need to be in how we use it.

As for AI, I don't think we need AI to solve sustainability; that is happening. It might help us accelerate it, but I think we should also be cautious about AI and just make sure that, as we develop AI, it doesn't get out of control and that the AI helps make the future better for humanity.
No, I mean, when I was in college I just thought, well, what are the things that are most likely to affect the future of humanity at a macro level? And it seemed like it would be the internet, sustainable energy, making life multi-planetary, and then genetics and AI. I thought the first three, if you worked on those, were almost certainly going to be good, and the last two were a little more dodgy.

I mean, just ubiquitous computing everywhere. I think AI is going to be incredibly sophisticated in 20 years. It seems to be accelerating, and the tricky thing about predicting things when there's an exponential is that an exponential looks linear close up, but it's actually not linear. And AI appears to be accelerating. I had a debate with someone about whether AI is accelerating or not, and they were saying, well, what's the y-axis? If it's accelerating, you have time on the x-axis, but what's the y-axis? I thought about that; I think you could have a recursive y-axis, so that if at any point in time your predictions for AI are coming sooner or later, that would actually help define whether it's accelerating or not.
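A quick numerical sketch of the "exponential looks linear close up" point (my own illustration, with arbitrary growth numbers, not from the interview): over a short window an exponential curve is almost indistinguishable from its tangent line, but over a long window the gap becomes enormous.

    # Sketch (my own illustration, arbitrary numbers): an exponential curve versus
    # the straight line that matches its value and slope at t = 0. Close up the two
    # agree; farther out the exponential runs away.

    import math

    GROWTH_RATE = 0.5  # arbitrary assumed rate per unit time

    def exponential(t):
        return math.exp(GROWTH_RATE * t)

    def tangent_line(t):
        # line through (0, 1) with the exponential's initial slope
        return 1 + GROWTH_RATE * t

    for t in (0.1, 0.5, 1, 5, 10, 20):
        e, l = exponential(t), tangent_line(t)
        print(f"t={t:>5}: exponential={e:12.2f}  linear={l:8.2f}  ratio={e / l:10.2f}")

Close to the origin the ratio is essentially 1, which is why an accelerating trend can look flat right up until it visibly does not.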
But I'm not sure if I fully answered your question. So, in terms of what I think...

Yeah, please.

So, for sure, ubiquitous computing, and AI that's beyond anything the public appreciates today. I think we'll have most new vehicles being produced be electric, and we'll probably have the supermajority of energy being produced be sustainable, so I think we're on a good path there.

Solar, primarily, in your mind?

Yeah. So I think those are some good things; I think we'll hopefully be on a good path for sustainable energy. Sooner is always better, but I think by 2035 we'll be substantially there: most transport and most new energy being produced will be sustainable.

Broadband everywhere?

Broadband everywhere, yeah.
At the very basic level, how should people think about artificial intelligence? If you were going to explain it to one of your younger children, you would say artificial intelligence is what?

It's just digital intelligence, and as the algorithms and the hardware improve, that digital intelligence will exceed biological intelligence by a substantial margin. It's obvious, when you put it that way, that we'll exceed human intelligence at some point soon.

The machine is going to be smart, not just smarter but exponentially smarter than any of us. Ensuring that the advance of AI is good, or at least that we try to make it good, seems like a smart move. But we're way behind on that.

Yes, we're not paying attention. We worry more about what name somebody called someone else than whether AI will destroy humanity. That's insane.
What are the scenarios that scare you most?

Humanity really has not evolved to think about existential threats in general. We've evolved to think about things that are very close to us, near term, to be upset with other humans, and not really to think about things that could destroy humanity as a whole. But then in recent decades, really just in the last century, we had nuclear bombs, which could potentially destroy civilization; obviously we have AI, which could destroy civilization; we have global warming, which could destroy civilization, or at least severely disrupt civilization.

Excuse me, how could AI destroy civilization?

You know, it would be something like the same way that humans destroyed the habitat of primates. I mean, it wouldn't necessarily be destroyed, but it might be relegated to a small corner of the world. When Homo sapiens became much smarter than other primates, it pushed all the other ones into small habitats, because they were just in the way.

Could an AI, even in this moment, just with the technology that we have before us, be used in some fairly destructive ways?

You could make a swarm of assassin drones for very little money, just by taking the face ID chip that's used in cell phones, a small explosive charge, and a standard drone, and having them do a grid sweep of the building until they find the person they're looking for, ram into them, and explode. You can do that right now; no new technology is needed. But AI could also be used to make incredibly effective propaganda.

The way in which regulation is put in place is slow and linear, and we are facing an exponential threat. If you have a linear response to an exponential threat, it's quite likely the exponential threat will win. That, in a nutshell, is the issue.

Precisely, yes. Essentially, how do we ensure that the future constitutes the sum of the will of humanity? If we have billions of people with a high-bandwidth link to the AI extension of themselves, it would actually make everyone hyper smart.
At least one engineer at Google thought that what was happening in terms of their AI machinery was closer to human thought than had been seen before, and was quite worried it had a personality. Is that something that you think about at all, or worry about?

I think we should be concerned about AI, and I've said for a long time that I think there really ought to be an AI regulatory agency that oversees artificial intelligence for the public good. Just as with anything that is a risk to the public, whether that's the Food and Drug Administration or the Federal Aviation Administration or the Federal Communications Commission, wherever there's a public risk or a public good at stake, it's good to have a sort of government referee and a regulatory body. I think we should have that for AI, and we don't currently. And that would be my recommendation.
