Back in 2005, Mark Zuckerberg gave a guest lecture in a Computer Science course at Harvard.
He started by talking about some of the strategies that Facebook used to improve its performance, one of them being caching.
After talking for a while he asked the audience for questions. To his surprise, there weren’t any technical questions as such; well, anyone would be surprised, I guess. After answering a few questions he finally remarked, “No CS questions?”
To build a faster website you need to ship less code. Astro is a static site builder that delivers lightning-fast performance with a modern developer experience.
If a page really needs an interactive JavaScript component, Astro loads that component’s code only when it becomes visible in the viewport.
It supports npm packages, CSS Modules, and more, and produces SEO-friendly output.
How does it work?
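As a minimal sketch of the idea: the page below is rendered to plain HTML at build time, and the interactive component is hydrated lazily via Astro’s `client:visible` directive. The `Counter` component and its path are hypothetical, stand-ins for any framework component you might use.

```astro
---
// Runs at build time on the server; none of this ships to the browser.
// Counter is a hypothetical interactive component (e.g. written in React).
import Counter from '../components/Counter.jsx';
---
<html>
  <body>
    <h1>Static content, rendered to plain HTML at build time</h1>

    <!-- The component's JavaScript is only downloaded and hydrated
         once it scrolls into the viewport -->
    <Counter client:visible />
  </body>
</html>
```

Other directives such as `client:load` and `client:idle` let you tune when hydration happens; with no directive at all, the component renders to static HTML and ships zero JavaScript.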
We know Elon Musk is close to the cutting edge in Artificial Intelligence (AI). In this talk he discusses his concerns about it.
He answers some of the pressing questions and concerns and why some kind of regulation is much needed.
He sees threats that the so-called AI experts fail to see, and he admits that it scares the hell out of him.
Just 5 minutes into this talk you get an idea of the kind of impact AI can have. Let me add here that he explains the difference between Narrow (or Weak) AI and Digital Super Intelligence, discussed later.
I am sharing some of the highlights from his talk.
1. Several AI “experts” think they know more than they do, and think they are smarter than they actually are. They don’t understand the repercussions. He mentions that the rate of improvement in this area is exponential.
Consider AlphaGo: in a span of 6–9 months it was able to defeat the world’s best Go players. AlphaGo Zero then crushed AlphaGo, having learnt purely by playing against itself. Give it the rules of any game and it can pretty much beat the best human players. The question is: did the experts predict that?
Similarly for self-driving cars: he predicts they will be 100–200% safer than human drivers in a year or two.
2. Narrow or Weak AI does not pose a risk to the species. It will cost jobs, enable better weaponry, and so on, but Digital Super Intelligence does pose such a risk. That’s why we need to proceed very, very carefully.
A super intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. — Wikipedia
3. He talks about regulation to ensure everyone is developing AI safely. Industries with far smaller dangers are regulated, so why is AI not?
To conclude, he hopes that these developments will be symbiotic with humanity, and that we don’t create systems that pose a threat to us.