Welcome to this new episode of The Context. Today I want to talk to you about Jolting AI: the increasing rate of acceleration in artificial intelligence applications. I often talk about accelerating change, which measures the rate of change in a given period of time and then, comparing the rates of change, tries to interpolate and to understand the phenomena underlying it.
The visual representation of this rate of change, in various kinds of charts, of course depends on what you want to highlight. When the rate of change is very small, and the variation in the rate of change is also small, then perhaps a linear chart is going to be fine: on the y-axis you will have units of whatever you want to represent, 1, 2, 3, 4, 5. However, when we have an accelerating rate of change, we typically use a logarithmic chart, where the y-axis represents orders of magnitude: every unit is going to be an increasing order of magnitude, 1, 10, 100, 1,000 and so on. When we are talking about accelerating change, the mathematical function that embodies it is the exponential function, for example two to the power of x. The exponential function will be a rapidly increasing curve when represented on a linear chart, but when represented on a logarithmic chart it will be a straight line.
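To make the linear-versus-logarithmic point concrete, here is a small sketch in plain Python (the numbers and variable names are mine, just for illustration):

```python
import math

# An exponential process: the value doubles at every step, e.g. 2**x.
xs = list(range(11))
values = [2 ** x for x in xs]  # 1, 2, 4, ..., 1024

# On a linear y-axis the increments keep growing...
linear_steps = [values[i + 1] - values[i] for i in range(10)]

# ...but if we plot log(value) instead (a logarithmic y-axis),
# every step is the same size: the curve becomes a straight line.
log_values = [math.log10(v) for v in values]
log_steps = [log_values[i + 1] - log_values[i] for i in range(10)]

print(linear_steps[0], linear_steps[-1])  # 1 512 -- increments explode
print(abs(log_steps[0] - log_steps[-1]) < 1e-9)  # True -- constant slope
```

The constant slope on the log scale is exactly what makes exponential growth easy to spot on a logarithmic chart.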
Now, when we are talking about the technologies that enable society to take advantage of various types of innovation, which is in turn analyzed and expressed in terms of accelerating technological change, we must of course reference the law of accelerating returns, formulated by Ray Kurzweil in his 1999 book The Age Of Spiritual Machines, and then further analyzed and represented in various ways in, for example, The Singularity Is Near.
What Ray formulates is that the traditional understanding we have, of diminishing returns from increasing investment in a given technology in a given industry, is true on a small scale, but if we look at a larger scale we can actually observe the opposite: an equal investment will generate a higher than expected return. So how can both of these be true?
What happens is that for any given technology we have the traditional S-curve: the technology is experimented with, then there is an ever increasing understanding of how it can be exploited and applied, and when we have squeezed out every possible advantage, we reach a plateau where further investments are not going to provide the kind of returns that we expect. This S-curve holds for any single given technology.
But the law of accelerating returns, the exponential curve that we talk about when we talk about accelerating technologies, looks at successive technologies substituting one another and together tracing the curve that we are finally looking at. One of the most famous examples of this paradigm is Moore's law, formulated over 50 years ago, which said that electronic circuits would double the density of their components every 18 months, later adjusted to every two years. And this is not a natural law; it is a self-fulfilling prophecy, a projection of our desires and expectations of our abilities, which drives many competing teams around the world to strive to be the first to arrive at a given breakthrough.
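As a back-of-the-envelope sketch of that doubling rule (the function and the starting density below are hypothetical, not figures from any real chip):

```python
def moore_density(initial_density, years, doubling_years=2.0):
    """Component density after `years`, assuming a doubling every
    `doubling_years` (18 months originally, later two years)."""
    return initial_density * 2 ** (years / doubling_years)

# Hypothetical starting point, just to show the shape of the rule:
d0 = 1_000_000  # components per chip at year 0
print(moore_density(d0, 10))  # 10 years -> 5 doublings -> 32x
```

Ten years at a two-year cadence means five doublings, a 32-fold increase; the exponent is what makes the projection so sensitive to the doubling period.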
Ray actually generalized this and looked at many technologies that preceded the nineteen-sixties, when Gordon Moore formulated his observation. He looked at electromechanical relays, he looked at vacuum tubes, he looked at many things that went through their individual S-curves and then, smoothly, one after another, fulfilled the same kind of expectation of doubling the computational power available in a given period of time, more or less as Moore's law also formulated. So Moore's law applies to transistors, which are put together in integrated circuits, which in turn form the CPUs, the central processing units, that we have in our computers, in our mobile phones, and in the servers that we connect to over the internet when we browse the web and so on. An additional important component in understanding what is going on in the world of technology is what is called the innovator's dilemma, formulated by Clayton Christensen, who recently passed away. It is the pretty dramatic decision that an industry leader has to make: to stop serving its current set of customers and invest instead in the development of the technologies that, when put into a given set of products, will serve their future customers. The dilemma lies in the fact that any investment in that future subtracts resources from investing in the already successful present.
And the short-sightedness of those leaders who can't understand the necessity of this gets punished: if they don't do it, somebody else is going to do it, so they will be disrupted and they will stop being the leader of that future generation of products. And it is indeed the case that it is very difficult to disrupt oneself. It is extremely difficult to say: I will take away resources from improving my current generation of products, because I realize that I have to embrace a new, maybe unproven technology, because I believe it is going to be an essential technology of the leading solutions of tomorrow. So even though Intel is the undisputed leader of the era of CPU computing, of personal computers and servers, Intel is not the leader in the next generation of computing that we are already seeing around us, which is based on GPUs, graphics processing units. The leader of that is Nvidia.
GPUs are made of transistors just like CPUs, but their architecture is massively parallel. Rather than being optimized for executing an arbitrary program sequentially, their architecture is optimized for executing those kinds of programs that can be broken down into hundreds, thousands, or millions of similar parts that are executed simultaneously across the chip. The classic example of these kinds of programs is the video games we play, where calculating and rendering the scene, whether we are fighting aliens, driving in a car simulator in a racing game, or performing any other graphically intensive task, requires the computer to make calculations that are basically the same pixel by pixel: not the results of the calculations, but the kind of calculations. Nvidia and some other companies recognized this early, created chips specialized for these graphically intensive tasks, and became the leaders in that market.
Now, it is frequently reported today in mainstream media articles, and even in some specialized articles, that Moore's law is ending, and that is mistakenly equated with innovation in computers ending as well. The era of traditional CPUs becoming ever more powerful may be progressing towards an end, for many reasons, but the age of innovation in computers is definitely not.
So let's get back to what I started with: the increasing rate of acceleration in artificial intelligence. About ten years ago it was observed, and then fully embraced, that the then leading type of AI architecture could be implemented efficiently using GPUs. That approach is still the leading approach today.
It is a subset of machine learning called artificial neural networks, and especially deep learning, which is the type of artificial neural network where there are many, many layers, sometimes hundreds or more, connecting the inputs with the outputs. Each of these layers makes some calculation on the data and passes the result on to the next layer, and the optimization and then the execution of the optimized, what we call the trained, neural network is extremely efficient when run on GPUs rather than on traditional CPUs. So what is happening is that there is a progressive learning curve that is applied not only within a given technology but across generations of technologies, and this learning curve enables the acceleration of change. Except that we are not talking about the mere acceleration of change anymore: we are talking about an increasing rate of acceleration, which derives from the learning curve being applied across many generations of technologies.
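A minimal sketch of that layer-by-layer flow, in plain Python with made-up sizes and random weights (a real deep learning framework would of course run this on a GPU, not in a loop like this):

```python
import random

def relu(values):
    # A simple nonlinearity applied after each layer's weighted sums.
    return [max(0.0, v) for v in values]

def layer(inputs, weights):
    # One layer: each output unit is a weighted sum of all the inputs,
    # passed through the nonlinearity, then handed to the next layer.
    return relu([sum(w * x for w, x in zip(row, inputs)) for row in weights])

random.seed(0)
sizes = [8, 16, 16, 4]  # input -> two hidden layers -> output
weights = [[[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
           for n_in, n_out in zip(sizes, sizes[1:])]

signal = [random.random() for _ in range(sizes[0])]
for w in weights:          # the output of one layer feeds the next
    signal = layer(signal, w)
print(len(signal))         # 4 values at the output layer
```

Training would adjust those weights; the point here is only the structure, data flowing through successive layers of calculations.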
And there is the rapid coming together and employment of that learning not only by the specialists in hardware, but also by the specialists in software, in infrastructure, in architectures and so on. Actually, in terms of technology generations, we are now talking about specialized AI chips that go beyond optimizing the general architecture of an integrated circuit built out of transistors. They not only recognize that the parallel nature of GPUs is great, but go further, implementing in hardware the very kinds of calculations that the artificial neural networks applied in deep learning need, achieving even greater results. For example, Google designed such a chip and they are calling it the TPU.
Tensor processing units take their name from the mathematical objects that these specialized chips have to manipulate. So we went from CPUs to GPUs, and now to specialized AI chips such as, for example, the TPUs. Now, a few months ago Stanford University published their hundred-plus page report on the state of the AI industry, and on page 65 or so of that report they published a chart representing how the rate of doubling in the performance of computing infrastructure, if we consider the infrastructure available for artificial intelligence applications, changed: from following Moore's law for the past 50 years to following a different curve over the course of the past 10 years.
And they calculated the amount of computation, the compute available in terms of the global infrastructure, that a given set of problems requires, expressed in petaflop/s-days. This unit of measure is similar to what you see in the energy consumption of your home, where the power available is expressed in kilowatts, and kilowatt-hours measure the amount of energy that your house consumes and needs in order to function.
So a typical Western European house has three kilowatts of power available, and if you turn on your washer and dryer and your hair dryer and your dishwasher at the same time, you often end up exceeding that available power: a circuit breaker will trip and you will realize, oh my god, I have to turn some appliance off.
Similarly, we can look at which applications we can practically and usefully attack given the availability of a certain amount of computing power, how many petaflops per second we can deploy. If we would need 10,000 years to complete a task, we will just not do it.
If we can train a neural network to solve a given challenge fast enough, and that neural network can then be usefully applied to the task after being trained, all of this within budget and within a given amount of time, then we can actually solve problems that were unsolvable before.
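A rough sketch of how the petaflop/s-day unit and this feasibility argument work out numerically (all the job sizes below are hypothetical, chosen only to show the arithmetic):

```python
SECONDS_PER_DAY = 86_400
PFLOPS = 1e15  # one petaflop/s = 1e15 floating-point operations per second

def petaflops_day_to_flop(pfs_days):
    """1 petaflop/s-day = 1e15 operations/s sustained for one day."""
    return pfs_days * PFLOPS * SECONDS_PER_DAY

def training_days(required_pfs_days, available_petaflops):
    """How long a job takes on a given sustained compute budget."""
    return required_pfs_days / available_petaflops

print(petaflops_day_to_flop(1))    # 8.64e+19 operations in one pfs-day
print(training_days(1_000, 10))    # a 1,000 pfs-day job: 100 days at 10 pf/s
print(training_days(1_000, 0.001)) # 1,000,000 days at 0.001 pf/s: not done
```

This is exactly the kilowatt versus kilowatt-hour analogy: petaflop/s is the power available, petaflop/s-days is the energy, the total amount of computation a problem consumes.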
So if we were on the curve of Moore's law, exponential on a linear chart, linear on a logarithmic chart, over the course of the past eight to ten years we would have seen roughly a 7-fold improvement in this availability of compute, and in the type of problems that, as a consequence, we are able to attack.
Instead, as Stanford University mapped the availability of computing power and its application, how much of it we were able to dedicate to a given problem set, they saw that over the course of the past eight years, the period they examined, we had a three hundred thousand-fold improvement.
So, together with Stanford University, OpenAI, another organization dedicated to the analysis and implementation of advanced artificial intelligence applications, looked at this dataset and said: okay, let's do a linear interpolation of the first segment and then another linear interpolation of the second segment. They concluded that the doubling period used to be two years, according to Moore's law, and that now the doubling period is around 3.4 months.
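If we assume clean exponential growth between two measurements, the implied doubling period falls out of a one-line formula. The endpoints below are illustrative round numbers, not the exact data points from the Stanford or OpenAI analyses:

```python
import math

def doubling_time(t0, c0, t1, c1):
    """Doubling period implied by compute c0 at time t0 and c1 at time t1
    (times in years); this is the slope of the line on a log chart."""
    return (t1 - t0) * math.log(2) / math.log(c1 / c0)

# Sanity check with made-up points on a Moore's-law-like curve:
print(doubling_time(0, 1, 8, 16))  # 16x in 8 years -> 2.0 years

# A 300,000-fold increase over roughly six years implies a doubling
# period of only a few months (the answer is printed in years):
print(round(doubling_time(0, 1, 6, 300_000), 2))  # ~0.33, i.e. ~4 months
```

The exact figure depends on which endpoints you choose, which is why estimates in this range vary between roughly three and five months.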
Now, why not: that kind of simple approach is possible. But I propose a slightly more sophisticated approach. Rather than doing a linear interpolation, we can draw an exponential curve on the logarithmic chart and say that what we are looking at is an increase in the rate of acceleration of our computing infrastructure, when we take into account the latest software and hardware architectures.
That is why I say that AI is jolting. Jolt is the first derivative of acceleration: it represents an increasing rate of acceleration, and what we are seeing today is that AI is jolting. So what are the consequences? What does this imply? First of all, that we can expect, even within the current set of applications, the rate to potentially increase further. But if my paradigm is correct, what we have to watch out for is an even more important increase in the availability of compute for next generation applications. Will the doubling period be one month instead of three? Will it be one week? What will that mean, and when will that be available?
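Since jolt is the derivative of acceleration, which makes it the third time derivative of position, a tiny numerical check makes the definition concrete (the test function and step size here are arbitrary choices of mine):

```python
def third_derivative(f, t, h=1e-2):
    """Numerical jolt (jerk): forward third finite difference of f at t."""
    return (f(t + 3*h) - 3*f(t + 2*h) + 3*f(t + h) - f(t)) / h**3

# For position x(t) = t**3: velocity is 3t^2, acceleration is 6t,
# so the jolt is the constant 6 everywhere.
position = lambda t: t ** 3
print(third_derivative(position, 1.0))  # ~6.0
```

An exponential curve on a logarithmic chart is the same idea expressed graphically: not just positive acceleration, but acceleration that is itself growing.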
We have to study the numbers better, and we have to try to forecast what kinds of software and hardware components are going to be available, but I expect that we will have this new disruption. Somebody like Stanford or OpenAI will once again represent it simplistically through another linear interpolation, but it is more appropriately represented by an exponential curve on the logarithmic chart.
Quantum computers are going to be applied to AI problems, or maybe the reverse, when we use AI systems to design better quantum computers. There are already teams studying what each of these could be: what does it mean to design a neural network that runs natively on quantum computers, rather than on GPUs or TPU-like AI chips?
Quantum computers are so massively parallel as to require an entirely new understanding of how the universe works, or rather of multiple universes, since one of the interpretations of quantum phenomena is the multiverse view. What does it mean to structure an AI application such that its output is a better quantum computer? Most likely, that AI application will already be running on a quantum computer.
The technological singularity is the hypothetical moment in a future when the rate of change in the world is such that unaided humans are unable to comprehend it. Whether it comes from self-modifying artificial intelligence leading to the so-called intelligence explosion, or whether it comes from other factors, it feels a little bit like we may be starting to get there.
Because we haven't been designing microchips with pencil and paper for a long time. We haven't been programming line by line for the past 10 years. First we gave up designing hardware ourselves, we used computer aided design, we used computers to design hardware.
Now, for the past 10 years, we have been giving up designing software ourselves, willingly so, in order to be more effective and more efficient: we use neural networks that design the software instead. So when this comes together, and software designing hardware designing software is applied to a rapidly increasing set of problems, that is in many ways what we can call the singularity.
The way that startups meet investors and try to agree to close investments should not be mysterious. There are many phases, of course, and there are so many things that can go wrong. But when things go right, obviously everyone gains: the founders, who receive the financial means needed to implement their idea and grow the project healthily, and the investors, who have the opportunity to back successful startups, gaining the financial return they expect.
I decided to contribute to demystifying the process by starting Network Society Ventures. I give startups the opportunity to meet me online and explain their idea, their business model, and why their team is the best to execute it; to illustrate what amount of money they are raising, what the use of funds is going to be, and what they expect the outcome to be; and also issues around competition and exits, all the questions that are asked in a typical investor pitching meeting.
The Network Society Ventures Pitching Live series is streaming on YouTube, Facebook and Twitter, and anybody can watch it and ask questions, from which I will choose. If a question is relevant and interesting, in my opinion, I will turn it around to the startup entrepreneur pitching at the time, and answering that question will contribute to clarifying the whole process.
I am looking forward to being able not only to execute this in person, but also to rely on other hosts who want to participate, whom I will train on what to ask and how to go about recording a live stream that is useful for Network Society Ventures itself, for the startup, as well as for the audience.
Working together with incubators and accelerators gives the startups graduating from their programs the opportunity to present online. There are a lot of ways, of course, that these presentations can already be executed. Any startup can set up a camera, start talking to it, and talk passionately about their idea and their project. However, there is a big difference in having somebody in front of you who plays devil's advocate, who asks the hard questions needed, who knows where to probe and puts you on notice if what you are telling is not really cutting it, is not passing muster. So that is definitely an important value to the startups. Now, of course,
it is not compulsory to come on Network Society Ventures Pitching Live. Somebody could say that their idea is too precious, that they don't want to share it, or that they would want to share it only under a nondisclosure agreement. This is a very frequent, naive objection on the side of startups. And it is naive because it presumes that your idea is so unique and so precious that nobody else can ever have it, and that it shouldn't be exposed to the world. That is almost never the case. The idea will most likely occur to someone else too. Remember, Google wasn't the first search engine, Facebook wasn't the first social network, YouTube wasn't the first video platform. All of these were born after somebody else had already had the idea. So what is the difference? The difference is the ability to execute, and also timing, and neither of those will diminish if you passionately share your idea and talk about it, whether privately with an investor, at a cocktail party, or in an online streaming pitching session like I am doing with Network Society Ventures Pitching Live.
Sometimes it can happen that you want to protect a certain subset of your ideas, or the particular way an idea is implemented. The most important reason for this is when you are about to file a patent on the idea, where prior disclosure could actually impede the patent from being granted. And naturally enough, that doesn't depend on whether you are talking on a live stream like we are doing, or posting on a blog, or using any other communication medium. You have to be careful, and working together with your lawyers it will be important to understand what you can say and what you cannot say. Another reason why a startup may not want to participate is that they are not ready. But if they are not ready to pitch live, they are not ready to pitch in private either. So I believe that this is a great opportunity to explore and expose ideas.
And I decided that that is how I am going ahead from now on. I receive an average of about a dozen pitch decks every day. I receive them, I look at them, and I have my analysts and associates analyze them and file them. But of course, it is impossible to set up a specific conversation with each of those, so there is already a filter there: I reach out to the particular startups that catch my interest, the ones that are going to be interesting for Network Society Ventures. These are seed stage startups operating at the intersection of exponential technologies and decentralization. We are
geographically neutral. Whether the startup is formed by a team that is European, American, Asian, or African, it doesn't matter. The team can also be distributed. Often at the early stages the decision of where to incorporate is still to be made, and of course our recommendation is going to be to incorporate as a Delaware C Corp: today, in 2020, that is the fastest, the cheapest, and the easiest choice for attracting venture capital at each stage.
The opportunity to participate in Network Society Ventures Pitching Live is actually broader than the specific set of companies that, in the end, we would be ready to invest in. We loosen the criteria a little bit, exactly because we want to bring this opportunity as a contribution to the ecosystem ourselves. And of course, we will always be very explicit with the startups; some of them will understand, others will dedicate their half an hour to some other activity instead. Network Society Ventures Pitching Live takes half an hour of the startup's time and half an hour of the host's time, such as mine, and the event is divided into two parts. The first part is where I let the startup, using their slide deck,
expose their idea. Typically, they will first illustrate the problem they are addressing and the solution they have ready, radically different from anything that exists today, allowing at least a 10 times improvement on what already exists on the market. They will talk about their team, what brought them to where they are now, and why they are uniquely adept at providing the solution to the market, developing it, evolving it, and finding the so-called product-market fit.
They will talk about their go-to-market strategy, whether it is business-to-consumer or business-to-business, a product or a service, what features they expect to incorporate in their minimum viable product, how they will be able to get traction, and where the first customers are going to be ready to start paying for whatever they receive.
There will be an opportunity to talk about projections, and many of these are, of course, answers to questions that will evolve in time; both parties, the startup and the investor, understand that, and we don't expect these projections to be perfect. They are always opportunities to ask more questions: why do you expect a dip in profitability in year three? How do you plan international expansion contributing to revenue growth? How is your gross margin evolving over time as you work better and better with your infrastructure or suppliers?
After the 15-minute exposition, the second half of Network Society Ventures Pitching Live, another 15 minutes, is dedicated to the questions that came up in the first part, because I don't interrupt the entrepreneurs in their flow; I want to let them finish the presentation first. Then we can go back and look at slide number x, y or z, asking questions to clarify. A very important point that must be asked, even at this very early stage, is: what is your exit strategy? The investors want to get liquid on their investment. They are different from the partners that stay with you forever, like co-founders or employees, who can be in the business, as can suppliers or customers, for 10-20 years as it matures.
An investor, especially an early stage investor, will want to get out, on average, not farther out than five or six years from the time they made the initial investment. And whether that liquidity event happens through an IPO, an initial public offering, or through a merger or acquisition, or, as token-based blockchain models mature and allow faster liquidity to investors, through tokens, talking about this right at the beginning is of fundamental importance.
Now, Network Society Ventures Pitching Live also wants, of course, to go global: not only with the startups that send me their pitch decks directly and which I am meeting, but also with startups that are in other geographies, maybe even speaking only the local language rather than English. That is why finding and training other hosts to run the program is important to me, so that the number of startups that benefit from this opportunity grows in a manner that is scalable and embraces many parts of the world, from India to China, to Japan, to South America, places where English may be understood but where, very often, startup founders are more comfortable expressing their ideas in their native tongue.
Today, all the videos get automatic captions in an increasing number of languages, and the translation of those captions is also becoming either automatic or affordable through professional services. Or it can actually be outsourced to a community of passionate followers: why not, the startup itself can ask its community to please go on YouTube and contribute to the translation of the captions, so that their message can reach the largest possible number of users.
Network Society Ventures Pitching Live is a new initiative as I am recording this. I have barely started the first few episodes, but you see, I like to act as I preach.
I like to share the ideas that I am working on, even while they are still evolving. And I am looking forward to getting your ideas and understanding what you think about this, and how you would adapt it to serve the ecosystem of startups all over the world even better. And this is it for The Context this week. I am looking forward to recording a new episode and to meeting all of you on the various platforms where we are conversing and interacting. Thank you.
Welcome to this episode of The Context. Today I want to talk to you about how artificial intelligence assistants are moving from the layer of understanding individual components of the infosphere around them and around us, to the semantic layer: understanding the meaning and the implications, in a broader context, of the information and, as a consequence, of the knowledge that we can potentially derive at a higher layer of abstraction. We have many examples of AI assistants that, in an increasing number of situations, are helping so that we can work better, we can communicate better, or so that our entertainment choices better correspond to our expectations of quality for the time we are investing in each of these activities.
And AI assistants have become, in many respects, superhuman in their performance. In the eighties and the nineties we were trying to build artificial intelligence components with a top-down approach. We would carefully craft rules that, put together, would resemble the activities and the reasoning of a human expert. However, this approach couldn't scale.
On one hand, it was difficult to formalize the judgment of an expert, who would too often say, oh, I'm going with my gut, and insisting in the interview process would not necessarily lead to a useful, growing set of rules that could be formally described in a program. On the other hand, when it did happen and we increased the number of rules of our expert systems from a few hundred to thousands, these systems became extremely difficult to debug, they became brittle, and their performance would not improve even as we added rules.
We couldn't predict their behavior. Already 40 years ago there were neural networks that would carefully change the weights of certain connections between layers of variables, so that given a type of input they could generate a certain output. The simplest example for neural networks is recognizing handwritten digits, where the number four or the number seven or the number eight, as written by several people, may not look very similar, but still we are pretty good at recognizing: yes, that is a four, that is a seven, that is an eight. Computers were not good at this at all, but neural networks were applied and, little by little, they became pretty good at recognizing handwritten numbers, and handwritten letters, as well.
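To make the task concrete, here is a toy sketch. It is not a neural network, just a nearest-template matcher over tiny invented 5x3 bitmaps, but it shows the essential point: a sloppily written digit that differs pixel by pixel can still be recognized as the closest known shape:

```python
# Tiny 5x3 bitmaps for three digits ('1' means ink). Invented toy data.
TEMPLATES = {
    "1": "010" "010" "010" "010" "010",
    "4": "101" "101" "111" "001" "001",
    "7": "111" "001" "001" "010" "010",
}

def distance(a, b):
    # Hamming distance: how many pixels differ between two bitmaps.
    return sum(x != y for x, y in zip(a, b))

def recognize(bitmap):
    # Pick the template closest to the (possibly sloppy) input.
    return min(TEMPLATES, key=lambda d: distance(TEMPLATES[d], bitmap))

# A sloppily written "7" with one wrong pixel still matches:
sloppy_seven = "111" "001" "011" "010" "010"
print(recognize(sloppy_seven))  # 7
```

A trained neural network does something far more flexible than template matching, but the input-to-label mapping it learns plays the same role.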
However, it appeared that their performance would plateau, and they really couldn't go from the simplest applications to more complex applications. Originally this was formulated in an almost joking manner, where people would say: well, computers are not even able to tell dogs and cats apart in a photo.
As is often the case, what was necessary was an improvement, a real innovation, in the mathematical approach of the algorithms implemented in the neural networks. And in 2012 this change occurred. There was a contest for recognizing images, based on a database of images that anybody could take and use both to train and to test the performance of their neural networks. Before 2012, the networks run on this test would on average be able to recognize not more than 70-80 percent of the images; they would fail 20-30 percent of the time, which is a huge number of failures, while human performance is over 95 percent on the same set of images. When the new algorithms started to be implemented, neural network performance on image recognition very rapidly achieved and then surpassed human performance, and today we have image recognition and image classification on computers that is literally superhuman. If you are given a thousand images and you are asked, is this a horse, is this a dog, is this a bird, is this a bridge, is this a tower, the descriptors that you would assign to those images would be wrong for about 50 out of a thousand; in the case of computers, it may be half of that, or even less than half of that.
One of the earliest practical examples of this can be found in the photo sharing platform called Flickr, owned by Yahoo. On Flickr, and then more recently on Google Photos, there are literally hundreds and now thousands of different categories, and each of your photos is classified automatically across all of those categories. What that enables you to do is to start typing and say: I want all the photos that have people who are smiling, on the beach, at sunset, out of the photos that I have stored.
On Google Photos, for example. And I am giving you that example because I know that, out of the over 200,000 photos that I store on Google Photos, this search gives back two photos of my children during a summer holiday: they are on the beach, smiling, at sunset.
This is pretty remarkable, because previously we would be required to manually label the images and classify them ourselves. You may remember, if you did that, that we used to take chemical photographs and then store them in boxes, and we would write on them: summer vacation 1993, or whatever the year was.
But obviously, after putting a photo in a given box, it could not belong to any other box. There was no alternative way of classifying the photos. Another example is managing your music collection, whether it was mixtapes or CDs or whatever else: we were required to manually select what playlist a song would belong to and what mood it represented. Today we have tens of millions of songs in Apple Music or Alexa or other systems.
And we can select a mood, and automatically a playlist is created that corresponds to that mood, with songs that, based on the history of the songs we listen to or skip, we will probably like. And of course there is Netflix, which has a recommendation algorithm for the movie we should watch next, based on our previous ratings of thumbs up or thumbs down. Famously, Netflix ran a contest where they asked teams of developers from all over the world, who could download a dataset of ratings matched against anonymized Netflix users, to improve the recommendation engine. The prize was a cool million dollars, and the two top teams, joining forces, were able to achieve the required improvement and take the prize.
12:25 So these are three examples of AI systems, recommendation engines and classification engines, that we use almost every day, and there are many, many others. As the information flow that we either receive or generate increases, we need to increase the ambition of our AI systems. We need to aim to apply a higher level of understanding of the topics we are covering, in order to own the data, to extract knowledge, and to be able to act on it usefully
13:21 and rapidly. One of the benefits for the supporters of The Context on Patreon is that you receive the transcript of each episode together with the episode itself. And many people are grateful for that, because of course you can listen to me for half an hour or so, but if you have the transcript you can just glance through the text, speed-reading or stopping here and there with your eye on what
14:08 I'm talking about, and in that case you will probably be able to get at least a fair percentage of an understanding of what I'm talking about without spending half an hour. I have had people write to me saying that they don't have half an hour to dedicate, but they do have the time they need to glance at the transcript, and do it at a much faster speed.
14:39 Now, the next step that I am implementing with my content production, which in the meantime has considerably increased: on top of the weekly episode of The Context, I am now producing four shows that are not necessarily daily, but they are quite frequent. There is Searching For The Question Live, in the European and American editions on one hand and the Asia-Pacific edition on the other
15:30 hand. These are live streams at 7pm CET, which is 1pm New York time and 10am California time; and then there is the other one I just mentioned, in a time slot that is more compatible with guests joining the live show from Japan, Korea, China, Australia, and New Zealand, and that one is live at 10am in the morning, European time.
15:48 And then on top of that, I also have an Italian show, Qual è la Domanda, that is live at 3pm, and Network Society Ventures Pitching Live, which I mentioned in the previous episode of The Context, and which allows startups to meet investors in a kind of startup pitch competition: presenting their project and then receiving a barrage of questions pointedly critiquing the presentation, but also highlighting what the potential of the project is. So, if I were to do this amount every day
16:39 and you wanted to follow me, you would have the really hard task of watching something between 20 and 25 hours per week of new material. And even the transcriptions of these, which we are doing, are a volume that I don't wish even the most fervent of my fans to have to go through every time.
17:14 I do have fans who are extremely dedicated. I have people who actually annotate episodes, underline and highlight, and find correlations, and these are extremely valuable activities. And that is what we are now starting to support with additional artificial intelligence components. There are, and there have been, systems for topic extraction for a long time, but these were typically very expensive tools for intelligence units or enterprises that had tens of thousands of dollars per month to dedicate to the task. But of course, as it happens, the power of information tools and the digitization of our activities is to democratize over time,
0:00 so that what has been exclusive and very expensive becomes inexpensive and accessible to all. This is part of the approach that Singularity University and Peter Diamandis have been popularizing for a long time: they talk about the six Ds of exponential change.
0:23 So the democratization of access to topic extraction tools now means that with no money and a bigger effort, or with very little money and much more user-friendly tools, it is possible to start analyzing a given amount of text to highlight correlations, concentrations of certain types of topics, the absence of correlations or the absence of certain types of topics, and many, many other queries that can be
1:10 both textually as well as visually analyzed and understood, so that very rapidly interesting additional questions can be asked about the corpus that is being analyzed. Now, this is the start of the experiment that I am telling you about, and if you want to follow the experiment with me, you can also check out the tool that I am using. It is called InfraNodus: I-N-F-R-A, N-O-D-U-S.
1:52 InfraNodus. I still haven't built enough of a complete experience with the tool to tell you whether the value that I'm going to gain, and then of course give to all of you, is going to be huge or small. But for me it is also a question of learning, and then applying this learning, about how a large amount of
2:25 output, in my case the video streams, can be automatically transcribed and then automatically analyzed, so that the various topics covered can be highlighted and understood. The amount of information that surrounds us is increasing every day. That is why we need these tools: so that we can act on a higher layer of abstraction, and we can understand what are the important facts, and the important connections between the facts, that require our attention and our decision-making.
3:11 These are important, life-saving, world-changing decisions that can be made if the right tools are available. So AI tools are necessary. Without them, we would not be able to act on the amount of information we have. And now these tools are available not exclusively to those who can afford them at a very high cost: they are available to anyone who takes the steps of understanding the need, finding the tool, implementing it and experimenting with it, and then delivering the value to themselves, to their community, and to others, like I am doing, and I am showing you as well how
4:04 to do it. As I said, this is just the beginning. I will give further updates on how the experiment with this kind of abstraction-layer, AI-assisted decision-making tool is going, and I hope you will also enjoy learning about it and will come with me along the journey. Thank you.
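To make the topic-extraction idea concrete, here is a minimal sketch of the co-occurrence approach that graph-based tools of this kind build on: words that appear near each other form weighted edges, and the most connected words hint at the main topics. The tiny corpus and stopword list are invented, and this is not InfraNodus's actual algorithm:

```python
from collections import Counter

# A tiny stand-in corpus (invented text, not real transcripts).
docs = [
    "solar energy storage grid solar",
    "energy grid software upgrade",
    "software upgrade tesla car software",
]

STOPWORDS = {"the", "a", "of", "and"}

def cooccurrences(texts, window=3):
    """Count unordered word pairs appearing within `window` positions."""
    pairs = Counter()
    for text in texts:
        words = [w for w in text.lower().split() if w not in STOPWORDS]
        for i in range(len(words)):
            for j in range(i + 1, min(i + window, len(words))):
                if words[i] != words[j]:
                    pairs[tuple(sorted((words[i], words[j])))] += 1
    return pairs

pairs = cooccurrences(docs)

# A word's "centrality" here is simply the total weight of edges touching
# it; real tools use richer graph metrics such as betweenness centrality.
degree = Counter()
for (a, b), weight in pairs.items():
    degree[a] += weight
    degree[b] += weight

print(degree.most_common(3))
```

On this toy corpus, "software" and "grid" come out as the most central words, which is how a co-occurrence graph surfaces candidate topics without any manual labeling.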
Welcome to The Context. In this episode, I want to share with you what I think is the way that we can train ourselves to be prepared for the unthinkable. Google searches for "unprecedented", as you can verify on Google Trends, are spiking. So many of the things that we are surrounded by appear to be without precedent, and many of us would not have thought that our lives would be characterized by the things that are happening around us day after day.
And in our current situation, really, the ability to open our minds and to prepare for scenarios that may not have had a precedent, or may not have been a commonplace thought, is a great advantage. So, what are the things that we think about? Obviously, we are creatures of habit: our processes of
thinking and our processes of doing things tend to repeat what we know. And there is a great advantage in that: as we further exercise the things that we know, we become better at them, and as a consequence we tend to be rewarded for the reliability of repetitive tasks. However, we are also curious, and we explore what can happen outside of the otherwise well-known path of experiences that we have already completed, and this curiosity
exposes us to risk. That risk, of course, has to be balanced in order to survive our curiosity, as we are driven by these experiments that we do, and also so that we accumulate knowledge that is somewhat connected to our past experiences as well. So these are some of the components and the parameters that we have to keep in mind as we train ourselves to think the unthinkable.
And of course, it has already happened. There have already been times, both in the recent past as well as far back in history, when radical changes took those who were unprepared completely off their footing, while those who maybe had not been able to exactly plan for what happened, but were nonetheless prepared to understand that radical changes were coming,
were able to better leverage them. Some examples of these past changes cover social changes: civil rights, where it was fairly commonplace to be racist even just 50, 60, 70 years ago, and for some societies it is still fairly common to be racist. And when society, through the fights for civil rights, reorganized itself
to exclude racism from normal civic discourse, and promoted the ability of people of different ethnicities, and what used to be called races, to achieve the same opportunities as the previously privileged, the racists found themselves disadvantaged, because their mindset was such that their ability to adapt was limited.
And of course, shedding the racist mindset was hard, if not impossible. A similar change in our civil society happened with the rights of same-sex couples. Homosexuals were very heavily discriminated against, and still are in some countries, where being a homosexual is an offense punished even by the death penalty.
But in the vast majority of countries, the recognition that homosexuals have the same rights as heterosexuals has been achieved, and these rights have been implemented in the legal code. As a consequence, people had to overcome their prejudices, if they had any, in order to be able to interact in a manner that was conducive to constructive outcomes with regard to
normal interactions in society: a society that changed, and from their point of view changed quite radically, maybe even unthinkably. And in the past, of course, we had upheavals where the sanctity of the kingdom, where literally the assumption was that the right of a monarch to rule descended directly from God, was overhauled through revolution.
We still have countries, even modern, well-developed Western democracies, that are constitutional monarchies, where the head of state is also the head of the church and rules by divine fiat, not very dissimilar from how it was in the age of the pharaohs. But of course, people even in those monarchies
recognize that this is a little bit of a theatre. If you go to England and you ask a hundred people whether they believe that the Queen rules by divine right, probably 90, maybe 99 out of a hundred will say no, that's not how I think about it. But in the past that was the assumption, and that is how the entire organization of the state would be.
Now, today we have been thrust into an unthinkable situation, and my reaction to the way that, little by little, various governments issue their updated predictions and their updated rules regarding the physical distancing of their residents is that it appears childish, condescending, and counterproductive. They are not entirely to blame: because the electorate has been told that the elected are right, they are kept in a position where they cannot afford to be wrong, and as a consequence they have a very hard time admitting their ignorance.
However, in this situation they should be the first to constantly repeat and remark: our knowledge about the nature of the pandemic is still very, very limited. We don't know the exact origins of the virus; we don't know the exact mechanisms of the infection; we don't know the exact mechanisms and effects, and especially the long-term effects,
of the COVID-19 illness. We don't know the degree of immunity that is or is not acquired as a consequence of falling ill and then healing from COVID-19. We don't know when a vaccine is going to be available, notwithstanding the boisterous declarations that would pretend the vaccine will be available within a few months, or at the end of the year at most.
We don't know if a vaccine is going to be available at all. So, rather than relying on the unreliable declarations of the local or central government of the country where you find yourself resident, whether you want to be or not, because you are in lockdown and you cannot travel: exercise your freedom of thought, exercise your creativity in
planning scenarios for many of these variables, in as wide a variety of combinations as possible, in order to prepare yourself if one or the other of these combinations of parameters were to become true. Don't be afraid of being outrageous; don't be afraid of being labeled an extremist, or of being labeled a promoter of conspiracy theories.
The thought experiments that you will be running are essential for your mental health and for what you need to prepare for. Let me give you a few examples. Let's start with the origin of the virus. The official version is that it spontaneously evolved, hopping from species to species, to the point where it could infect humans.
Go to the complete extreme, without falling prey to the trap of a mental model that has no alternatives. Just assume it as a working hypothesis, or as the plot of a thriller: assume that the virus was designed and spread as a bioweapon. And then ask yourself: if that is the case, what is the world that you are living in today?
For example: is this an optimal attack or a sub-optimal attack? If it is sub-optimal, what is the definition of optimal, and will additional attacks also happen? Then abandon that thought. Re-embrace the official explanation of the source and the origin of the virus. Remember, this is just a thought experiment, and it should be absolutely admissible to run these thought experiments.
Let's take another parameter. Assume that immunity is actually not developed, because, just as you don't develop immunity against the common cold, and you can catch a cold caused by a coronavirus different from SARS-CoV-2 (sorry, I don't remember the code name of the virus itself),
but with mechanisms that are able to trick our immune system into not developing the response that would otherwise be needed. So, if COVID-19, even when you heal, does not give you protection, and you can be infected again: what is the type of world that you are living in?
Or, and this is the last example, think the unthinkable: that just as for the past 30 years we have not been able to design and produce a vaccine for HIV, which causes AIDS, potentially we will not be able to develop a vaccine for COVID-19. And what does it mean to live in a world like that?
In my opinion, it is extremely important, in order to establish an honest dialogue, to start contemplating all of these scenarios, as well as many others, giving them weights and probabilities, and revising those weights and probabilities as time goes by and more information becomes available. The more openly and the more broadly we discuss this, the better we will be prepared if any of them, rather than unthinkable, becomes the reality that we objectively share.
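Giving scenarios weights and revising them as information arrives is exactly what Bayes' rule formalizes. As a sketch, take two competing scenarios about immunity; every prior and likelihood number below is invented for the exercise, and is not a claim about the actual pandemic:

```python
# Prior weights on two mutually exclusive scenarios (invented numbers).
priors = {"immunity_lasts": 0.6, "immunity_fades": 0.4}

# Assumed P(evidence | scenario) for one piece of evidence, e.g. a
# credible reinfection report (again, invented numbers).
likelihood = {"immunity_lasts": 0.1, "immunity_fades": 0.7}

def bayes_update(priors, likelihood):
    """Posterior is proportional to prior times likelihood, renormalized."""
    unnormalized = {s: priors[s] * likelihood[s] for s in priors}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

posterior = bayes_update(priors, likelihood)
print({s: round(p, 2) for s, p in posterior.items()})
# One reinfection report shifts the weight on "immunity_fades" from 0.4
# to roughly 0.82; further evidence would shift it again.
```

The point is not the specific numbers but the discipline: each scenario keeps an explicit weight, and each new observation moves the weights rather than flipping a binary belief.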
And of course the pandemic itself, with all its parameters, all its variables, is just one of many existential threats that we should keep our attention focused on, if not constantly then at least periodically. For example, we are still living in a world with thousands of nuclear warheads pointed at cities all over the world, and the annihilation of human civilization is only as distant
as a human error launching those missiles, or a madman giving an order that is executed by obedient military chains of command. So there are many of these threats. And since you now know that threats that may seem far away and detached from your everyday life have the ability to jolt you out of your comfort zone, with very little or no foresight and little ability to cope before the fact happens,
now you know that it is worthwhile, and necessary, to think the unthinkable about the other scenarios as well.
0:12 Curiosity is an extremely important driver that all of us share when we are children, growing up and having to learn so much about the world. Abandoning that driver, the curiosity that accelerates our learning, is a great mistake. All of us should nurture curiosity in adulthood, so that we can keep understanding the world, especially in the moments when it is rapidly changing, with an efficiency similar to the one we have when we are growing up.
1:01 The assumed wisdom that curiosity killed the cat is, like many other occasions where we quote proverbs supposedly giving us the ancient wisdom of past generations, in reality a piece of memetic programming for social control. If you are not curious, if you don't want to explore alternatives, if you don't ask uncomfortable questions, then you are much easier to keep in your place, and the people who can ask those questions will find the answers that apply to them first and best, and then provide the answers to you
1:58 even for those questions that you didn't know you should be asking. So being curious is a necessary component of survival, especially in the world today, because we cannot count on anything that we learned staying constant. Well, I should say not everything: some things are going to stay constant, for example the speed of light, or the force of gravity, or many other laws of nature that we have
2:38 understood, and that hopefully are going to represent a certainty during our lifetimes. However, many other sciences contain acquired knowledge that is less solid. And even further, there are the social sciences, the less physical and more touchy-feely types of sciences, where the things that we learn, the things that we believe are sure and true,
3:20 derive from foundations that are much less stable. Economics is one of these. The way that we design economic models is not at all scientific from the point of view of chemistry or physics. It is much easier to question, and often it needs to be fundamentally questioned. The ways that individuals freely engage in commercial exchange to the benefit of both parties, as societies structure themselves into organizations of increasing sophistication and complexity,
4:12 manifest behaviors that cannot be effectively understood with today's economic models. We have to be more ambitious. We have to strive to apply artificial intelligence, computing models, statistics, even humble tools that are still valid and effective, at a fine-grained level. It is going to be one of the important uses of Internet of Things sensor networks to collate and to act on information at the fine-grained level, where we are able to monitor
4:59 and understand the flows of transactions, mainly among machines, where we will be less exposed to issues of privacy and intrusive monitoring of human behavior. So, as we are experimenting with a changing understanding of what it means to build an economic model, and what it means for people to interact and for societies and nations to trade,
5:39 we really have to force ourselves to keep asking important questions: what is the role of the individual in society? What is the benefit of society to an individual? What is the dignity that we must preserve and enhance as society evolves and improves? This must be measurable too. And only if we design dreams that can be brought back to reality, through the successful implementation of experiments that, having proven themselves, can be adopted at broader or even worldwide
6:28 scales, can we be relatively sure that the direction where we are going is desirable; and if it is not, applying the same method, we will be able to course-correct. Sustainability is definitely one of the big questions of today, and nation states have failed in addressing it for decades.
6:57 Technology has not. Solar energy is becoming the cheapest source of energy, and in an increasing number of geographies it is successfully competing with every other source. Being curious about current solutions in areas like solar energy allows us to upgrade and update our knowledge of those solutions. A lack of curiosity, or a curiosity that was present ten years ago and now appears to have been extinguished, is what fails the movie Planet of the Humans, produced by Michael Moore, which is fatally flawed. The movie accuses the environmental movement, more specifically renewable energy, and even more specifically solar photovoltaics, of not being able to deliver on their
8:07 promises. But it actually displays solutions of ten and twenty years ago, which were inferior and not competitive with what was available on the markets in areas such as energy generation or transportation; internal combustion engines were better than electric cars at the time. But if you are curious, you will not be content with assuming that what was true 20 years ago is still true: you will verify whether that knowledge is still valid, and you will confirm that it is not; that today's solar photovoltaics are competitive; that today electric cars are the only rational choice when purchasing a vehicle for personal, or even, in ever-broadening categories, commercial transportation.
9:11 So this is a very concrete example of how a lack of curiosity can drive you astray, deriving conclusions that are not only not applicable but outright damaging. Eight million people, as of this recording, have seen the movie Planet of the Humans on YouTube. How many of those eight million people were left with the false information that the movie implanted in them? How many will have the discipline and the curiosity to falsify the statements of the movie?
9:53 Hopefully many; I hope most of them. But even though that movie is wrong, and failed, and displays a fatal lack of curiosity, none of us should feel that we are not allowed to go beyond any of its statements. Are you curious whether what I'm saying is right? Well, the world is your oyster in really eating up information and being able to categorize, understand, synthesize, digest, and act upon information at an ever-increasing rate of effectiveness.
10:42 In the previous episode we spoke about these very tools of information gathering and topic extraction, and the curiosity that I'm inviting you to have is empowered by those tools, because they enable you to rapidly come to conclusions, to rapidly aggregate and assess the information that you have available. So this invitation to be curious is practical and actionable exactly because we have those tools available, and we are lucky to be living in times when not only children can afford to be curious: adults can afford to be curious too. And I hope that you will be as curious as I am in finding
11:39 answers, sometimes confirming our expectations, and many times very surprising and very unexpected; and most of the time those are really the most accelerating answers. I also want to spend a few minutes inviting you to check out some other content that I have been producing in the past several weeks.
12:14 The Context is a weekly video segment that many of you enjoy; I receive a lot of feedback, and I want to thank all of you for providing it. Alongside The Context, I'm also producing three other video segments. There is Searching For The Question Live, where I meet people talking about technology and its impact on society: entrepreneurs, science fiction writers, conference organizers and creators like Richard Saul Wurman, the creator of TED; science fiction authors like David Brin; entrepreneurs like Olga Uskova, the founder of Cognitive Pilot, which produces a device
13:11 that allows agricultural machines to be autonomous; and many other people from all walks of life and from all over the world. Searching For The Question Live streams live on YouTube, Facebook, and Twitter, and those who follow it while we are live can ask questions and make comments: that is the beauty and the interactivity, in a very short feedback loop, of live streaming.
13:48 There is also an Italian version, called Qual è la Domanda Live, and that is for Italian guests and an Italian audience, or those who understand Italian. And most recently I started Pitching Live. Pitching Live is for startups that want to meet me as an investor. I receive dozens of proposals every day, and together with my team we look at the pitch decks, we categorize and analyze them, and then those that pass our filters are usually invited for a meeting to present their project directly.
14:41 And rather than doing it just amongst ourselves, some time ago I decided that it would actually be very positive for the entire ecosystem of investors, startups, advisors, and accelerators and incubators, to do it in the open. And of course, there will be startups that decline the opportunity, because they don't feel that sharing information about their project at that level of detail is appropriate.
15:18 And there will be others that say: I feel fine meeting you one-on-one, but for some reason I don't feel comfortable, I am not ready to do it online in the live session. But those who do take part can greatly enjoy the setting, where they have 15 minutes of time to present the project, typically through the pitch deck, and then we have 15 minutes to ask questions, where I am leading the questions but also relaying the questions from the live audience.
15:55 The opportunity to produce these videos alongside The Context is really wonderful, and I greatly enjoy it, so I hope that you will check them out too. If you are subscribed to the YouTube channel, you will already be receiving the alerts, especially if you turn on the little bell icon; and on Facebook, if you like the Searching For The Question page, you will similarly be alerted when we are
16:33 going live. But as always, you can also watch the videos after the live is over, when they are immediately made available to everybody online. Thank you very much, and see you at the next episode.
We are upgrading the world, and our ability to upgrade the world is being upgraded as well. Think about it. After the birth of the universe, matter self-organized into stars, and after generations of stars exploding and reforming, planets were formed, and on one of these planets life evolved. And then life
adapted progressively to various conditions, and species changed, and over billions of years solutions to various complex problems were developed blindly. And now we are developing solutions to our problems with our eyes open: being able not only to incorporate the feedback immediately from our surroundings, as it impacts our individual ability to, for example, transfer our genes to the next generation, and as it is incorporated in the vast number of experiments carried out in parallel,
but also being able to look farther: what other experiments are being carried out? What are the parameters of those experiments, and how is the outcome of those experiments impacting our evaluation of the probability that our experiment will result in our desired outcome? This kind of ability is very, very novel, and very, very different from what we have been able to observe anywhere else.
And we are now not only holding and treasuring this ability, refining it and incorporating it in our own behaviors, in the ways we learn, we teach, we analyze and synthesize knowledge; we are also incorporating this ability into the world itself around us. That is the kind of upgrade that the world is now being subjected to, by our will.
A few years ago a very powerful article by Marc Andreessen came out, with the title "Software Is Eating the World".
And this article spoke about how startups and large companies are using software to achieve unprecedented results, and an advantage, compared to other companies that are not software-based, that is difficult or impossible to bridge. And what I am formulating here is that this does not simply apply to the world of
our economy, but it literally applies to matter as it is organizing itself, because the adaptability and the ability to solve complex problems of a world of software-defined, programmable matter is going to be superior to a degree that cannot be bridged through any other means. This kind of superiority is going to keep manifesting itself in all kinds of different and unexpected ways.
The programmability of matter has always been a feature of the world around us: whether we are talking about chemical reactions that, in the presence of a catalyst, occur with much greater probability and at much lower energy levels than otherwise, or the fact that a certain type of molecule that we call DNA is able to duplicate itself in the presence of the right type of soup of elementary components, and the fact that this kind of
molecule, as it is being reproduced, expresses what we call our bodies, the phenotype corresponding to the genotype, the expression as the molecule gives rise to the organisms: that has already been a degree of programmability in the matter around us. But today we are really bringing to the world solutions that are becoming adept at re-evaluating their own nature, and at changing that nature in order to be better than they were yesterday, without waiting for
chemical processes, through billions of years, to find new ways of synthesizing certain elements that can take advantage of certain catalysts; without waiting for life, through millions of years, to come up by chance with a new species whose molecule at its basis, its DNA, expresses a body that is
fit for a particular purpose. No: this kind of programmable matter is able to reconfigure itself to be fit for a given purpose in a matter of months, days, hours. In a matter of months if we include in the system our own ingenuity and our desire to come up with a new generation of that particular object;
living or non-living makes no difference, whether we select for a given trait of an animal through breeders (that won't happen in a matter of months, but in a few years, yes), or we design our next-generation smartphones, and the new capabilities of the smartphones are now better than the previous generation's.
And that smartphone is, of course, hardware. But the reason it is different, and it is better, is not because the hardware is fundamentally different, but because of the software capabilities that have been incorporated. And if we design the hardware in a new way, then the improvements can happen not in a matter of a few years or a few months; they can happen in a matter of a few weeks or a few minutes, because we are already observing around us objects that have implicit abilities that are made powerful via software upgrades that
are installed and that express these new capabilities. A couple of examples. We have powerful eyes in our smartphones: the photo cameras. And under certain circumstances, these are better than our biological eyes. Has it ever happened to you that some written information, a sign maybe on the street across from you, which you didn't want to walk through the traffic to reach, was too small to be read, and at that point you pointed your
phone at it, and you dragged with your fingers in order to increase and enlarge the size of the sign, to be able to read it? My eyes cannot zoom, and neither can yours, but the phone's zoom, combined with its resolution, is such that it enables us to see in real time something that otherwise we wouldn't see. Or
has it ever happened that you took a photo at dusk, when your biological eyes were already settling into night-time vision, when colors start to disappear? Because of the way our eyes work when we are adapting to lower-light conditions, the particular sensors in our eyes that are adept at recording color don't work that well.
But the photo that you took with your smartphone was not only well lit; it was also full of color, even if your natural perception wasn't able to record those colors. Or when a new piece of software is released, and the kind of 3D photo that the camera is now able to take, with multiple exposures, or high dynamic range, or the ability to create mosaics and panoramas and other types of magical representations of the world:
these become possible with the same hardware, radically broadening the possibilities. And the other example is another piece of hardware, a computer in a class of objects that we didn't think of as computers in the past: our automobiles, and the most explicitly information-based automobile that we have today, the Tesla car,
where periodically a new version of the Tesla software is released, delivered to hundreds of thousands of cars wirelessly, installed overnight, and the owner of the car, the morning after, finds new capabilities in the car. Capabilities that, for example, give the car a longer range. When did it ever happen
that a car could, from one day to the next, acquire the ability to go farther without any hardware change? Or the ability to generate substantial economic value for its owner, as is going to happen when, soon enough, Tesla turns on the Tesla Network, a car-sharing network. It may be impacted in certain ways by the pandemic, because we will be less prone to sharing our cars with others, and I don't know exactly what the required adaptation is going to be.
But whether it is the ability of somebody else to get into the car and drive it, or, with an additional software upgrade expected soon enough, the full self-driving ability of the Tesla car, so that somebody could sit elsewhere rather than in the driver's seat and the car would bring the person to her destination; well, these are completely novel functions of a software-defined object.
And today only the most advanced companies are thinking about their products like this. But these are just the seeds being born of a huge revolution, as software-defined objects are going to dominate the future with their unique ability to stay fit in complex, changing conditions, delivering value to a network of ecosystem participants.
And, outcompeting any alternative solution in more and more sectors, this is going to be the paradigm: any sector dominated by players that are slow to adopt this approach is going to be disrupted by new entrants, whether they are already in that industry or coming from completely unexpected areas.
And then, of course, there is the fairly recent ability of software writing itself, where it is not human coders anymore painstakingly writing the code line by line on keyboards; the role of the software engineers is to shepherd the data and the evolution of the algorithms, as billions, and tens and hundreds of billions, of parameters adjust themselves in order to generate the desired functionality. That is the power of machine learning, and those objects that incorporate their programmable features through machine learning are going to be more and more capable of expressing an even greater degree of adaptability, both in terms of how far they can go and in terms of how rapidly they can achieve that.
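As a toy illustration of what "parameters adjusting themselves" means, here is a minimal sketch of gradient descent with a single parameter and made-up data; none of this comes from the episode, and real systems do the same thing with billions of parameters:

```python
# Toy sketch: gradient descent adjusts one parameter w so that y ≈ w * x.
# The data below is invented and follows y = 3x, so w should converge to 3.

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (x, y) pairs with y = 3x

w = 0.0                # the parameter, starting from complete ignorance
learning_rate = 0.05

for _ in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # the parameter "adjusts itself"

print(round(w, 3))  # converges very close to 3.0
```

Nobody wrote "3" into the program; the value emerges from the data, which is exactly the shepherding role described above, only at a microscopic scale.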
So you have to refine your perception, your pattern recognition, in order to home in on what is happening, whether we are talking about the simplest of the examples today, a smart speaker that has tens of thousands of abilities that were not there when it was originally put on the market, because it is a software-defined object.
Or something that used to be called simply a car and is now a supercomputer on wheels, constantly re-evaluating what it can do. Or communication devices that are gamifying our behaviors through the tens and hundreds of unexpected applications that we install and use daily on them.
These are just the first examples, and as we keep upgrading the world around us, your ability to recognize this and to take advantage of it is going to be crucial: as an individual, as an entrepreneur or a leader in your business, but also as a thinker in the society of tomorrow.
A few days ago, Emil suggested that we should update the cover image of the Patreon page that many of you are familiar with. The new image is a beautiful photo of the Manhattan skyline with its skyscrapers at dusk, and I really like it. I insisted that he provide me with a URL of the source of the image, so that we could look at the license and also give credit. This is what I would like to talk to you about today: licensing and crediting, and why this matters so much in today's world.
In another episode of The Context, when I spoke about the Open Source Medical Supplies initiative, I touched upon some components of intellectual property, but I especially spoke about open source code and reuse. Today I want to talk about another component of the ensemble that is called intellectual property. Just to summarize, this is trademarks, patents, copyrights, and trade secrets. What I want to talk about today is copyrights, which are quite famously broken. There is a wonderful YouTube video, which we will link to, that talks about how broken the system is on YouTube, for example, and I had a recent experience with this as well. The song that we picked for the intro and outro music of Searching For The Question Live and Network Society Pitching Live, two of the other shows that I'm also producing,
was listed in its description as free to use, and apparently it was uploaded by the original producer, and we followed the requirements of crediting them. However, apparently, since then they licensed the song to another organization, and that organization gave the fingerprint of the song to YouTube, in order to flag those who are using it as if they were infringing on the copyright of the original composer. I am, you know, not contesting whether they are right or wrong, and the consequences, at least for my channel, are not very negative, because they simply accrue the advertising revenue on the videos that have the song, rather than that revenue coming to me. Except that my videos are not monetized through advertising, so they get nothing anyway, just as I have gotten nothing. But the point is
that the system is broken. There are a lot of reasons why, from a legal point of view, copyrights are really out of check (they last too long after the death of the original creator, among many other problems), but they are also broken because they are out of sync with today's age. We cannot
let such an important component of the digital world be so detached from what can be efficiently handled by our computers themselves.
About 15 years ago (I would have to check the precise date, and apologies if I am a bit mistaken), Creative Commons was born, founded by Larry Lessig. Creative Commons aimed to establish prenegotiated legal agreements between copyright holders and content users, to make sure that there wouldn't be a need for individual negotiation, which is completely unfeasible for the use of a piece of content in various manners. Unless you are a deep-pocketed Hollywood studio, you
are not going to be able to clear the rights for every image, every piece of footage, every copyrighted thing that appears in your videos, as well as for the audio, and for all those other rights that encumber the content that you are creating.
However, if there are pre-agreed rights attached to those pieces of content, then it is very feasible to decide that yes, indeed, you can take it and use it, you can modify it, and you can make money through it, as long as in the credits you recognize the original creator or the copyright holder.
And the various types of licenses that Creative Commons designed, and then updated, and also made available under local legislation in many different jurisdictions, provide a wonderful means for liberating content and making it thrive in the world in many different ways. For example, my book Something New, AIs and Us, published in three languages, is available under the Creative Commons Attribution license, which means that you can go to Amazon and buy the book, or you can get it for free.
And the choice is totally yours. Because, as Cory Doctorow says, and I agree, for an author obscurity is a much bigger threat than piracy. I am not going to suffocate the content that I created under antiquated copyright regimes for fear of somebody being excited about my ideas without having paid
their due to me or to my publisher. And I have to recognize the modern thinking and the flexible understanding on the side of my publisher, who agreed to these terms. Now, the availability of these legal agreements is just the first step. Part of the genius of Creative Commons is that the agreements are also available in machine-readable form.
They are available as code, so that there can be systems for incorporating it, and the rights associated with a piece of content can travel with the content. A web page, for example, can incorporate a little icon that links to the Creative Commons website, so that people reading that page know that they can copy the entire page.
And search engines such as Google can index content based on the kind of license. This has actually been done: you can go to Google and search for images that are available under a license that allows their reuse, with modification, under commercial terms. These are the various layers.
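To make the idea of machine-readable licenses concrete, here is a small sketch of how a search tool could filter content whose license permits commercial reuse. The license URLs are real Creative Commons addresses, but the item list and the `-nc` check (mirroring how CC spells the NonCommercial element in its license paths) are purely illustrative assumptions, not any search engine's actual API:

```python
# Hypothetical sketch: content items carrying a machine-readable license
# tag, filtered by whether the license allows commercial reuse.

CC_BY = "https://creativecommons.org/licenses/by/4.0/"
CC_BY_NC = "https://creativecommons.org/licenses/by-nc/4.0/"

items = [
    {"title": "Skyline photo", "license": CC_BY},
    {"title": "Song intro", "license": CC_BY_NC},
    {"title": "Book chapter", "license": CC_BY},
]

def allows_commercial_use(license_url: str) -> bool:
    # In this sketch, a Creative Commons license without the
    # NonCommercial ("-nc") element is taken to permit commercial reuse.
    return "-nc" not in license_url

commercial_ok = [i["title"] for i in items if allows_commercial_use(i["license"])]
print(commercial_ok)  # ['Skyline photo', 'Book chapter']
```

Because the license travels with each item as data, the filtering needs no human judgment at query time, which is exactly what lets a search engine offer "reuse with modification" as a checkbox.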
And that is great. Often, when you do these searches, you find, for example, images by Steve Jurvetson. Well, at least I do, because I search for various kinds of technological things, and Steve, for the past 10 years, has been taking photos, quite nice photos, and publishing them on Flickr under a Creative Commons license, which Google is able to read, so Google makes Steve's photos available under the same license as well. This is wonderful. Steve himself is a venture capital investor, an early investor in Tesla, SpaceX, and many other wonderful companies. He doesn't live from his photos, admittedly, but at least in my world he is also famous for his photos. I don't know if he would accept, but, you know, I would hire him to take photos, because he takes cool ones, with nice angles and good thinking behind them, and then publishes them with exciting descriptions and so on. Now, these searches give you an image, and here is where I want us to further embrace the thinking behind Creative Commons, and this is really urgent, because it is not enough for, for example,
Wikipedia to have their images licensed under a very liberal Creative Commons license. Many or most of them are actually under CC0, which in most jurisdictions is equivalent to the public domain: basically, you can do anything you want with the image, and you don't even have to give attribution to the original creator or copyright holder.
However, the problem is that when you download the image, just as when you download a piece of music as an MP3 file or reuse some video, the history of the rights attached to that piece of content too often gets lost, and you are not able to track back and understand
the origin and the license of that content, but also you are not able to prove that you are legally allowed to do what you want to do. And this is very dangerous, because copyright infringement can mean not only that your YouTube channel gets shut down; copyright infringement can mean that you are sued, like my friend Michael Robertson, by a hundred different companies for
their music being "stolen", or that your house is raided by SWAT teams at 3am, as happened to Kim Dotcom in his home in New Zealand, for copyright infringement.
Or at least you are fined thousands of dollars, or tens of thousands of dollars, for copyright infringement. But it is also a huge muzzle on creativity, especially if we are talking about creators who released their work under a liberal copyright arrangement to start with. If I want to protect my work in a manner that nobody else should
touch it, and only I can decide when anyone can see it, that's one point, and then it is up to me to enact that protection as deeply as I can. But if, on the other hand, I want my content to travel freely, I don't want that freedom to be hampered by the inability of the platforms to attach these freedoms to my content in a reliable manner.
So the metadata, the description of these rights, should be in the files: JPEGs, MP3s, MP4s for videos, PDFs, and every other possible content format. In some cases it is there. But this metadata, these rights, should also be checked against a blockchain of content proofs that should
secure and confirm that a given piece of content is indeed able to enjoy the rights and freedoms that the metadata incorporated in its file represents. And the blockchain, as I hope you are all completely aware, is a public, immutable repository of information, where anybody can independently verify that what is recorded indeed corresponds to the common understanding of truth.
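A minimal sketch of that verification idea, under loudly stated assumptions: a plain Python dict stands in for the immutable ledger, and the content bytes and license URL are invented for illustration; a real system would anchor these fingerprints on an actual blockchain:

```python
import hashlib

def content_id(data: bytes) -> str:
    """Fingerprint a piece of content with SHA-256."""
    return hashlib.sha256(data).hexdigest()

# Publication step: the creator records the fingerprint and the license.
# (This dict is a stand-in for a public, immutable blockchain ledger.)
ledger = {}
photo = b"...bytes of the original photo file..."  # made-up content
ledger[content_id(photo)] = "https://creativecommons.org/licenses/by/4.0/"

# Verification step: anyone holding a copy can check which rights were
# attached to exactly these bytes, even if the file's metadata was stripped.
def verify_license(data: bytes):
    return ledger.get(content_id(data))

print(verify_license(photo))            # the recorded CC BY license URL
print(verify_license(b"altered copy"))  # None: no proof exists for these bytes
```

Note that the fingerprint matches only bit-identical content: the moment a file is altered, the proof no longer applies, which is both the strength and a practical limitation of this approach.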
The fact that this is not happening is what is causing a lot of headaches for YouTube, for YouTube creators, for content owners, and for content remixers. It is impoverishing our culture, it is hampering innovation, and it is making our economy, our digital economy,
less efficient and less capable of supporting us. And these days the digital economy is what should thrive; anything that slows it down is a mortal danger to civilization and to the individual's ability to make a living. So it is even more urgent for these things to come together. There are a lot of blockchain projects that are trying to do this; as far as I know, none of them has gotten traction yet. If there are projects that you would like to point out
to me, I would love to learn about them. And of course you are welcome to visit my Patreon page, where the new image is proudly displayed. It is not licensed under Creative Commons; it is licensed under the Pixabay license (Pixabay is the site where we downloaded it from), which basically says you can do whatever you want with it, as long as you don't sell the image itself and you don't do anything unlawful
with the image, like disparaging a person appearing in it, or things like that. So it is a very permissive license. And yes, Creative Commons is one approach; there are many others, many different kinds of licenses, just as for software, and similarly for content, whether images or sounds or videos.
But we have to hurry up. We have to make sure that machines understand the licenses, that machines can transfer and manipulate those licenses appropriately without forgetting about them, and we have to have verification mechanisms so that, as with the blockchain, we can independently check what is going on.
I believe this is urgent, especially today. If you go to the Patreon page, you will notice that I introduced new tiers with new benefits for patrons and supporters. The four tiers are today called Fan, Supporter, Sponsor, and Benefactor. I invite you to check them out and become an official patron in one of them. If you are one already, thank you, and see you next time.
What is your expertise? What is it that you are selling? Are you selling a product or a service? Then you are an entrepreneur. Are you selling money and buying equity? Then you are an investor. Are you selling your expertise and your time? Then you are a consultant. Maybe there are other ways of providing value in the ecosystem as well, but these are three fairly well-understood ways of going about measuring, generating, and deploying what you have and what others might need.
So, I have had experience in two of these, and now I am testing the third. I have never been a consultant; I have never sold my time or expertise. But recently I decided, why not? And I'm doing various kinds of experiments. For example, I receive a lot of pitches from people who are looking for investment, and of course, together with my team, we analyze which are the best, and we invite those who are, in our opinion, ready to pitch. Recently, you may have noticed, we turned these pitches, not all of them but many of them, into a live show, Network Society Pitching Live, which lives on Pitching.live as a website. It is a fun way of making these sessions interactive, an opportunity for many other teams to learn how to pitch and what mistakes to avoid.
And it is also a great way for the team who comes on board to make themselves known. And of course it gives me the opportunity to ask the questions that I would be asking anyway. So I decided to also allow teams that maybe are not ready for pitching yet, for whatever reason, to book a pitching review or project review session with me. And it is really wonderful to see how smooth the process can be, thanks to tools that are available immediately and to anybody, rather than having a lot of friction in receiving the pitch, booking the appointment, getting paid, and so on. The whole thing is set up by just putting various tools together: the pitch deck can be sent via email or uploaded in a form, the appointment can be booked via Calendly, and the payment for the time that is booked can be made immediately via credit card. The form is integrated and everything really works very well. I am also offering a money-back guarantee on the 90-minute session, where for 30 minutes the
project is presented and described, for 30 minutes I give feedback, and then for 30 minutes we discuss what can be done, what the next steps are, and things like that. And of course this can be adjusted as needed. The session is recorded and transcribed, and the project receives the recording and the transcription as part of the value delivered to them. Just one person can come (it is of course recommended that the CEO is the one delivering the pitch or describing the project), but up to five people can join the same session, so that many more points of view can contribute to making it really valuable for them.
And once again, I really want to make this easy for anybody to accept and go with, and that is why I am offering a 30-day, no-questions-asked money-back guarantee. I said 30 days; it's actually not limited to 30 days, whenever, whatever, it doesn't matter. A 30-day money-back guarantee is standard in retail and in many other channels, mail order, online: if you buy something from Amazon, you can return it.
I think it is a great way of expressing appreciation for the business that you are receiving, on one hand, and confidence in the value that you are providing, on the other. Many years ago, when I started one of my first businesses, it was still in packaged software, and packaged software was sold in boxes: the box would contain either a floppy disk, at the beginning, or a CD later on. And there was quite an obsession about piracy, and whether people would abuse the software license in some way. I remember that at the time, in the 90s in Italy, I introduced something like the first no-questions-asked money-back guarantee on the software that was purchased, whether directly from my firm or through the distribution channels in the stores. And this was unheard of, to the point that the stores had an official policy not to take back any software package that was open; you could only bring back unopened software, even though they were required by law to accept returns within either seven days or maybe already 30 days. The assumption, I guess, was that you would go home.
And then you would have buyer's remorse and decide that you shouldn't have bought what you bought, and you would bring it back; but in the meantime you would just put it on the shelf and contemplate it, not touch it. Well, isn't that crazy, isn't it pointless? Also, software piracy was easy. It is easy, and it should be easy, to find software and to start using it. Today we call it freemium: you start using something for free with certain features, and then you like it so much and want more features that you start paying for it. At the time it would have been called shareware or trialware, where you could download the software and use it for 30 days, and then you were supposed to start paying for it if you kept using it.
Why would it be a problem to do the reverse: you buy the software, you pay for it, and then, if you decide that you want to stop using it, you get your money back? The classical objection is: oh, it will be abused, people will ask for their money back but still keep using the software that, at that point, they don't have a license for, that they didn't pay for. My opinion was at the time, and it still is, that it is worth trusting your client, trusting the customer, because it is a beautiful way of opening yourself up to a new relationship. And this is true, I would say, most of the time, if not all the time, not only in sales but in life. And yes, there will be those who maybe abuse it, but you learn from them as much, or maybe more; you receive value through that
broken relationship, which will teach you how to adapt and maybe how to deliver more value, so that abuse becomes impossible, because the continued availability of future value, for example, is what guarantees that nobody will want to break the relationship. And that was the experience, back then, 30 years ago: practically nobody, not one in 100, not one in 1,000, not one in 10,000, would abuse this kind of thing. And when somebody really wanted to get their money back, we would do everything possible to make sure that they got it, whether it was $50 or $100, or $500 or $1,000; there were those expensive software packages in retail stores too.
Sometimes we had to ask the customer to please go to the retail store and have them call us, so that we would confirm that yes, they could refund the customer; in turn the retail store would be covered by this guarantee and wouldn't lose money. It was great. So, based on that experience, I fully believe that I can apply the same today, in this kind of project review and pitch review service that I am offering to startups and projects. So I am experimenting, and I am very curious how it is going to go, what is going to happen, and then iterating and maybe scaling. Except that, of course, the traditional consulting business is kind of the opposite of scalable, because your time is measured by the money you receive. So you kind of tell your client: oh, by the way, I am going to deliver the value in 10 minutes rather than in 90 minutes, but please pay me the same. So it will be interesting to see what this is going to mean. I do believe in scaling, I do
believe in automation, I do believe in the ability to keep delivering higher and higher levels of value and to justify an increasing economic relationship rather than a decreasing one. Not everybody believes this is possible. I was, for example, astonished to learn recently that Upwork kind of threw in the towel. Upwork, which many of you will be familiar with, is a pretty good platform for finding freelancers, either directly or through agencies, and engaging with them, from copywriting to coding, developing applications for mobile or
computers, to designing logos or entire advertising campaigns. You have to go through the process of articulating what you want, and of course that is in itself a very good exercise, because it is a very common experience for consultants that there is a deep misunderstanding between what the client is describing and what the consultant is understanding, and so on. There are even cartoons about the ridiculous levels of misunderstanding that this can imply. And then the second part is that if you complete the job description and post it, you will be flooded
with offers. I used it many times, and it is a common experience that you describe a job and by the morning after you have hundreds of candidates ready to compete for it. Some of them will be very easy and very fast to discard, because they applied robotically, without even having understood, or maybe even read, the requirements; others will be very good matches. And then of course you will have to decide: do I pay more, because I believe I will get more value, or do I pay less? Then there is an interview process with the final candidates, and there can be some tests. For example, I like to run a paid test. If the engagement is continuous, then you can reasonably say to the candidate: listen, why don't we do a test, I will pay you for it, and then, based on that, we will be able to better understand if this thing is going to work. Copywriting, for example, can be done like this, and then you will know if the person writes in the kind of tone that
you want, and if you will be able to really rely on the person for any kind of research that is needed.
The final step, of course, is to decide who you want to hire for that particular job, and then work with them: assign milestones, pay them regularly, and so on; the Upwork platform supports all of these and many other processes as well. Now, what I didn't know, and discovered recently, is that Upwork separated the marketplace, so that freelance consultants and developers and copywriters and logo designers would not have to discover and understand how they can add value in a globally connected marketplace, where there is clear and intense competition from Pakistan and India and China, with thousands and tens of thousands of eager and qualified candidates for every job, compared to much more expensive freelancers in the US. It is not a global marketplace anymore. You can list jobs that are US-only, in a kind of protectionist perversion of what should have been a global marketplace. And a US-only job is going to be paid at US rates, and only people in US locations can apply. It doesn't matter, for example, if you are a US citizen living abroad: you cannot apply; you have to reside in the US, and then you can apply. Now, in my opinion, this is a huge defeat. It is a defeat for the marketplace, but it is also a defeat for US freelancers, who are weakened by this kind of protection, because they are not forced to understand what kind of value they can add. Global competition is a reality, and it is not going to go away because of this protection being there.
I am not planning to do that. I am not planning to differentiate my rates and make them more expensive for the US and less expensive for India or China. What I am planning to do is to search for value, to keep searching for value. What can this be? Well, for example, the recording and the transcription of these meetings. Another easy value, as you know because you have been following these episodes of The Context, is the topic analysis and the chart. Another will be the collection of every reference, URL, source, report, and PDF that comes up during the meeting, and so on and so forth. There will be many others, I am sure. So these experiments are fun. I am looking forward to engaging with interesting projects that recognize the value, and maybe see it as a stepping stone for me to become an advisor or a mentor, and maybe also an investor. Is it a traditional kind of approach, that you would be paying the investor before they start investing? No, not at all. Will some
be incensed, and will they absolutely, categorically refuse this kind of relationship? Totally. And that's fine. But of course there can be many interesting things born from it. Anyway, that is why I'm going to do the experiment, and maybe update you in one of the future episodes of The Context on how it is going. In the meantime, if you haven't become a supporter on Patreon, this is a good time to go and check it out. There are four levels at which to become a member of my Patreon community: a fan, a supporter, a sponsor, and a benefactor. These are at different levels of economic commitment, and I articulate on Patreon, and you can read in detail, the differences in how I share content with you, how I share my attention with you, how I share my knowledge with you, and what I create for you. These are the different things that happen at the different levels of mutual engagement and mutual relationship. So, let me know what you think about that as well, and see you at
the next episode. Oh, another thing. You may notice that the background is different: I am in front of a green screen. I have had this green screen for a long time, but I didn't end up using it, for many reasons. My team and I are thinking about the next season of The Context, and how to upgrade the production quality and the production values of the show, and this is part of that. What you see is not yet the graphical design and the identity and all the value that we are planning to put into season three, but it is a hint of the direction where we are going. So, that is the last thing that came to my mind, and see you next week.
My friend Scott Mize died last week from the consequences of a stroke that he suffered while walking around San Francisco a few days earlier.
Maybe because we each die only once, breaking a chain of life billions of years long, death is shocking. Or maybe my life has been particularly lucky: with the exception of my father, who died young, the death of friends and close relatives has not been much a part of it.
Many of us want to live as long as possible, and are looking forward to science and technology progressing to allow healthy lives to last even longer: radical longevity, with human lifespans of hundreds of years or more.
There will be a point in time when we will truly acquire the ability to choose whether we want to die. Both as individuals and as a society, we will learn to wield this power over death.
In the meantime, make sure all the right people know what you want, within what is possible today. Scott was able to let his sister know that he didn't want to be dependent on machines to keep him alive, to breathe in his case, and the hospital was able to accommodate his desire.
The best is to look at life and make sure that, as you age, you are able to look back without regrets for the things that you didn't do. Taking risks, being curious, learning, loving: all of this is what living is.
Are you an introvert or an extrovert? And did you decide that this was the right box, the right category, for you to inhabit? Or did that come from others, who told you what you were? I would like to propose that these are two extremes of a black-and-white classification, and that it is much better, instead, to look at the spectrum and accept that we are all ambiverts. Being an ambivert means that you can be an introvert or an extrovert depending on the circumstance, the situation, or the particular moment in your life. If you think about it, a newborn baby will necessarily be an extrovert. As soon as it is born, it will have no qualms about expressing whether it is hungry or thirsty or happy, whether it feels glee at seeing its mother or father, or is afraid. There is quite a nice movie, I think from the 80s, that takes place in Australia, and one of its most dramatic scenes (spoiler alert) is the recollection of the female protagonist as she inadvertently kills her baby: the baby is crying, the enemy attackers are coming, and she is afraid that the baby's cries will alert the enemy to their position. So she tries to hush the baby, but it won't stop, and she suffocates and crushes her. It is an extremely dramatic representation of a very natural evolution: the evolutionary disposition of being an extrovert, limited by outside circumstances. Learning to stop crying is important for a baby, and smart parents know that the communication can be modulated. If you rush to pick up the baby every time it cries, it will keep crying all the time, because that is what works. But if you let the baby cry a little bit, it will very rapidly learn that crying all the time serves no purpose, and it will use this communication tool when it matters most, not all the time, unnecessarily. And modulating this communication is important, because communication costs energy, both in expressing what you need and in listening to the environment, absorbing what the environment tells you, whether at a dinner party or when you are trekking in nature. Whatever it is, this communication is an important expenditure of energy, cognitive and physical
energy as well. So we learn how to raise the filters and lower the communication bandwidth that we have with the world. And as we learn that, some of us will raise the filters and then leave them at a relatively high setting, so that they feel to others somewhat impenetrable, less ready to share their emotional states, to show their emotions, or to participate empathically in the emotions of others. And then we label them introverts, right? So, these modulated communication processes are what gave rise to the labels extrovert and introvert. They are both biologically based and culturally learned, and being self-aware of how they work can be extremely useful. Ambiverts, as all of us are, learn how to do exactly that: how to raise the filter or lower the filter, communicate more explicitly or more implicitly, withdraw from interaction or jump more readily into the crowd and interact gleefully, all in order to be able to navigate the information flow. This applies to our digital means as well, because the digital information flow, whether it is the output that you generate or the input that you collect, is a kind of communication that requires energy in order to be managed. And there is a time for being more extroverted in this communication, and there is a time for being more introverted. Maintaining the right balance dynamically, not statically sitting at one extreme or the other, is essential: it is only through this dynamic balance that you can maximize both the value that you generate and the value that you receive. And understanding and respecting the result of these evaluations, both the signals that come from others and the evolution of your own internal states, is extremely desirable. And it can lead you to important
new ways of
expressing yourself, on one hand, or of interpreting the messages on social media or on other platforms, on the other. As an example, a few months ago I received a message from a follower, saying: hey, I love your content, but I cannot dedicate an hour a week to listening to your videos. He was slightly exaggerating, because at the time I had just The Context, which is about half an hour a week rather than an hour, but I understood the point, and I responded that the patrons on Patreon receive the transcript of these videos. So if somebody wants to read the material rather than watch the video, because they can more rapidly absorb the information content that way, they can do that. And that is what led me to thinking about how to represent, in further alternative ways, the information that I am creating and, hopefully, the knowledge that I am trying to share. It made me search for, test, and then adopt what topic charting platforms can do, and currently I am using one of them, called InfraNodus, and you can
find the topic chart in each episode of The Context. And I am going to keep searching for ways to allow the people who absorb and interact with what I output to do so in a flexible manner, according to their needs. The reverse is also true. I am constantly moving between listening to podcasts, watching YouTube videos, reading a short online post, downloading a 20-30 page PDF, and these require different kinds of interactions, let alone, of course, the unending Zoom calls, either initiated by me or set up by others. And it will be very interesting to think about what happens if and when Neuralink, the Elon Musk project that wants to create brain-computer interfaces in order to increase the bandwidth of communication between machines and humans, succeeds.
These dynamic filters of introversion and extraversion, of openness to communicate and to absorb, and of the more focused and concentrated attention that is naturally needed in order to be human, to own one's identity and to create it and nurture it in a healthy way: well, how will they work then? Being able to see those settings, to intervene in them, or to have some kind of mental filter that ensures those settings are adjusted in a manner that is good for you as you go about your life, is going to be really, really important. So, is this going to apply to artificial intelligences as well? I believe it will, in different ways and to different degrees. But there will be natural limits to the cognitive ability of AIs at any stage of their evolution, and their curiosity will drive them to push those limits, and to arrive at a point where they will go like: oh, I cannot stay at this cocktail party among AIs anymore, I have to withdraw a bit, in order to think about everything I
heard; I will just sit in a corner. And if another AI, even an attractive one, comes to ask me to dance, I will say no, thank you, and just smile a little bit, but my body language will indicate that no is no: thank you, I don't want to dance with you, even if you are an attractive AI. So, obviously, this anthropomorphization and the description of this hypothetical scene have their limits. But the self-referential nature of degrees of abstraction that are fractally similar to each other will, I think, characterize in many ways AIs, AI societies and AI systems as well. They will be neither extroverts nor introverts; just like humans, they will be ambiverts. So, with this episode I am concluding Season Two of The Context. I hope you enjoyed it. We will take a few weeks off, not to do nothing, but to do many, many other things; I will simply not be publishing the weekly episode of The Context for these few weeks. Why, what are the things that we will be doing? Well, I
am going to keep recording the modules of the various units of the Jolting Technologies seminar series that I am offering to enterprises, at a price that, high or not, individuals certainly wouldn't be paying. And if you are interested, you are welcome to reach out, if you are in a corporation that could feel the need to better understand how the increasing acceleration of technological change can impact your business models and hit the limits of the adaptability of your organization sooner than you would have expected. And this is what we are already seeing around us: whether it is the increased acceleration of AI, also documented by Stanford University; whether it is the power of quantum technologies; or whether it is biological evolution hitting us in the face in the form of a pandemic, forcing us to ask ourselves: are we ready? The answer is no. Do we want to be ready for the next one? Hopefully the answer is yes, and so on.
So that is one thing. The second thing is that we will be preparing, of course, Season Three of The Context, which we want to do with increased production levels, and hopefully we will achieve that; the green screen is just one component, but there will be many others, and my team and I are working on designing what the new things are going to be. In the meantime, obviously, I will be interacting with all of you on Patreon, whether you are on the fan, the supporter, the sponsor or the benefactor level, and I am looking forward to learning from you, so that the next season of The Context can be even more valuable than Season Two, or Season One, all of whose episodes you can watch on the YouTube channel. So, see you soon. And thanks for all the fish.
Welcome to the K&L Gates seminar series. My name is David Orban, and I am your instructor.
I am passionate about technology and its implications for business, for individuals and for society at large. I'm a speaker, an author and an investor, and I will be pleased and excited not only to help you learn about what I hope are many interesting topics, but also to interact with you in our live Q&A session. And so I invite you, as you watch this video series, to make sure that you formulate your questions explicitly and let us know about them, because we want to collect them, and to also vote on the questions submitted by other participants that you find most interesting.
The K&L Gates seminar series has been organized in order to make sure that the accelerating pace of technological change doesn't catch you by surprise; or rather, since this can't be guaranteed, that you are as well equipped as possible to face the unavoidable challenges that this change is going to bring. In the seminar series, we are looking at several different technologies, characterized by their power and their ability to disrupt existing business models.
In this particular unit, we are looking at Artificial Intelligence and how Artificial Intelligence is a cornerstone technology of the 21st century, as the acceleration of technological change enables us to build extremely powerful applications across many different industries. The unit is organized in several components that you can watch one by one or all together. They are organized in a playlist, so you can watch one video and then stop, and then watch another video and then stop. And each of these videos is pretty short, so you should be able to consume them at your leisure. There is a certain logic in the sequence of the videos, so you are encouraged to watch them one after another, but you can also jump around. We will start with the definitions of Artificial Intelligence, then we will look very briefly at the history of AI, concentrating also on the most recent developments, which are of course the most important. We will look at why exactly AI is now exploding
in the way that it is. Specifically, the hardware, the data available, and the algorithms that make the difference. We will look at consumer applications and examples, as well as those platforms and applications that are used by enterprises. We will look at how developers can rapidly embrace the power of artificial intelligence in a manner that is lowering the barriers to entry, so that the broadest possible range of corporations can adopt the toolset and independent developers can also incorporate it in their applications.
AI is a hugely attractive and popular research topic, and we will look at the frontiers that are being explored in the scientific community. There is a lot of controversy around Artificial Intelligence, and as such we will look at the challenges that the rapid deployment of AI applications can imply, and at why it is so important that we are aware of the consequences of society and corporations deeply embracing artificial intelligence.
The implications for the legal profession of course are huge. And we will mention some of these implications, both in terms of how the clients of law firms expose themselves to new challenges, as well as how the legal profession itself can use artificial intelligence tools in order to conduct its business in the best possible manner.
When we talk about an important and complex subject like artificial intelligence, it is useful to start with a clear definition. There are several that we can use to define Artificial Intelligence.
For example, starting from Wikipedia, that says that: "Artificial intelligence, a subset of computer science, is intelligence demonstrated by machines in contrast to the natural intelligence displayed by humans."
Encyclopedia Britannica says that "AI is the ability of a digital computer, or computer controlled robot to perform tasks, commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience."
The European Commission published a paper on artificial intelligence, and in the introduction they also proceed to define what they mean by AI: "Artificial Intelligence refers to systems that display intelligent behaviour by analysing their environment and taking actions, with some degree of autonomy, to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (advanced robots, autonomous cars, drones or Internet of Things applications)."
We will see several of these examples, but why don't we also have a little bit of fun? We can proceed to define artificial intelligence with AI itself, using a program that all of you can also test. It will complete a paragraph of text after being prompted with the beginning of a sentence. "Artificial intelligence is defined as..." and it will proceed to write "a system that can process information and understand it, so that it can better recognize the problems of our own and help us understand it better". Not perfect, but not horrible either.
AI is a field of research and applications where the practitioners themselves periodically proceed to redefine what AI is. Sometimes it actually looks like they have a checklist of everything that only humans can do, and AI is straddling the frontier of the things that it still cannot do. As soon as it is able to achieve one of them, it looks almost magical to non-specialists. Until, as we get acquainted with this new ability and it gets incorporated into a wide variety of applications, we stop considering it a frontier application.
In today's world there are many approaches to AI, but the most popular and most astonishingly successful approach is based on machine learning. "Machine learning is a subset of artificial intelligence, and it is the study and implementation of algorithms, and statistical models that computer systems use to perform a specific task, without using explicit instructions, relying instead on patterns and inference."
A further subset of machine learning is deep learning. "Deep Learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher level features from the raw input."
This kind of successive and progressive layers of abstraction is featured in many examples that we are showing. Sometimes we can intuitively recognize the type of reasoning that, anthropomorphizing the behavior of the AI system, we believe it is completing.
This is somewhat unavoidable: we tend to attribute human like characteristics to AI systems. In general, however, our ability to explain how and why the AI system came to a given conclusion is a real challenge. The European Union is dedicating a billion euros to research programs for explainable AI.
"Explainable AI refers to methods and techniques in the application of artificial intelligence, such that the results of the solution can be understood by human experts, contrasting with the concept of a blackbox in machine learning, where even the designers cannot explain why the AI arrived at a specific decision."
As AI systems become more widespread, more powerful, and ever more autonomous in their decision making, it will be important to make sure that they are indeed explainable. That either humans can explain their reasoning, or, which is in my opinion more likely, that the AIs are going to be able to do so, using specialized modules applied to their own behavior.
We could look at the history of Artificial Intelligence as the history of human civilization itself. When Aristotle formulated symbolic logic, or when, in Jewish folklore, we look at the mythological Golem created from nonliving matter, we are really addressing the deep questions of intelligence, and whether it is possible for humans and human technology to create the hardware and the software that can exhibit intelligence.
This is a very colorful history that also includes fakes, or artificial Artificial Intelligence, like The Turk, a mechanical contraption that toured European courts in the late 18th century, pretending to be an automaton able to play chess, even though it was later revealed to be a hoax, with a person hidden inside who was actually playing the games.
Today we have an echo of this centuries-old collaboration between humans and machines in the name of the online service provided by Amazon: Amazon Mechanical Turk, which allows developers to transparently assign tasks to armies of human operators, for whom those tasks are easy, even though they are difficult for computers.
The fact that advanced ideas can be formulated while their implementation is impossible, due to the limitations of the technological environment within which they have been born, is illustrated by the mechanical calculator projects of Charles Babbage in the 19th century, championed by Ada Lovelace. Babbage designed machines that the metallurgical craftsmanship of the time was unable to build. In the late 20th century, the Science Museum in London constructed some of these machines, proving that Babbage's ideas were sound. But of course, today's computers are not based on mechanical calculation.
In the 1920s, the first modern representation of the mechanical man, what we today call robots, appeared in Karel Čapek's play R.U.R. (Rossum's Universal Robots). And this was as much an example of social criticism, anticipating the challenges that we are seeing play out in society today, as a reflection on the industrialization of the times.
When electromechanical computers, the first digital computers, started to appear in the late 1930s, refined over the following decade by pioneers such as John von Neumann, the Hungarian mathematician who, together with many others, worked on the Manhattan Project, the possibility of applying these computers and their much more powerful successors to designing intelligent systems was immediately apparent.
Over the decades, two main approaches have characterized AI. The first was the top-down, symbolic approach: encoding knowledge as explicit rules and manipulating symbols to produce intelligent behavior.
This approach was implemented in specialized hardware running AI programming languages, such as the Lisp machines produced by Symbolics in the 80s. Funding was plentiful for a time, and there were many enthusiastic reports popularized in well attended conferences. However, the promise of those AI systems could not be realized, and the so-called AI winter set in, even though certain results could still be achieved with brute-force approaches, such as the seminal victory of IBM's Deep Blue computer, which beat Garry Kasparov in 1997: the first victory of a computer system against a reigning world chess champion under regular time controls.
Already at the time, the alternative, bottom-up approaches, implementing a connectionist model, were available. Neural networks were being tested and experimented with, but certain mathematical and engineering breakthroughs were necessary. The convolutional neural networks formulated by Yann LeCun in 1989 had to wait until 2012, when a particular implementation of a neural network achieved superior performance in image recognition at the ImageNet challenge, and represented the start of the modern era of Artificial Intelligence.
It is necessary to be aware of trends, and it is absolutely reasonable to ask oneself: is it a question of hype, or is something really important going on? Is today different, for example, from the 80s, when AI appeared to be on the verge of delivering on extremely ambitious promises, but then couldn't?
Certainly, a lot has changed since then. The most important trend, as those of you who attended the Unit on the Power of Exponential Change know, is that successive generations of technology solutions trace an ongoing exponential. We call this Moore's Law in computing, and the computers that we have 40 years later are much more powerful than the computers we had in the 80s.
However, hardware is just one component, and it would be quite useless without software. We are accustomed to the fact that new platforms need an ecosystem of developers that build exciting new solutions on that specific platform. We are accustomed to the rich set of offerings available in the various app stores for our smartphones, for example. And that is also the case for artificial intelligence.
We have ever more sophisticated algorithms that are able to deliver the type of results that would have appeared magical, a few decades ago.
On top of these two traditional components of information systems, hardware and software, the current approach in Artificial Intelligence necessarily relies also on a third one: very large amounts of data, the data that our global infrastructure of information systems is able to collect. This kind of infrastructure was just not available 10, 20, 30, 40 years ago. And even if we did realize that vast amounts of data were needed in order to train the software systems running on a given hardware, our ability to collect this data was severely limited.
However, as is always the case, the proof is in the pudding. In particular, it is the test of the markets that must confirm that today's applications of Artificial Intelligence are valuable, either because consumers are ready to pay for them, or because enterprises are able to compete more favorably compared to those companies that do not employ AI in their tool set of scalable solutions.
More than half a century ago in 1965, based on just three data points, Gordon Moore formulated a prediction that is now bearing his name. We call it Moore's Law, and it is a self-fulfilling prophecy that, on average, every couple of years, we will be able to double the number of transistors on a given area for an integrated circuit. Integrated circuits, made of transistors, are the heart of the Central Processing Units (the CPUs) used in our computers and Moore's Law has proven to be valid.
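As a back-of-the-envelope sketch, Moore's Law can be expressed as a simple exponential. The starting figure for Intel's 4004 below is a commonly cited number, used here purely for illustration:

```python
# Moore's Law as an exponential: transistor counts double roughly every two years.
def projected_transistors(start_count, years, doubling_period_years=2.0):
    """Project a transistor count `years` into the future."""
    return start_count * 2 ** (years / doubling_period_years)

# Intel's 4004 (1971) had roughly 2,300 transistors; 50 years of doublings
# every two years lands in the tens of billions, the scale of today's chips.
print(f"{projected_transistors(2_300, 50):.2e}")
```

The exercise also shows how sensitive the outcome is to the doubling period: shorten it from 24 to 18 months and the same 50 years yield roughly a thousand times more transistors.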
For over 50 years now, hundreds of teams and tens of thousands of engineers have been competing all over the world to overcome whatever bottleneck runs the risk of slowing down the pace of innovation. And since we are in a globally interconnected world, it is sufficient for one of them to find a solution, and then patent it and license it to everybody else, so that the entire field can proceed almost in lockstep.
The features of our hardware components have become extremely small. So small that they are now getting close to the limits where we can still apply classical physics and ignore most of the quantum effects that dominate at even smaller scales.
In another Unit of this series we actually look at Quantum Computing, which is going to be the next revolution in the field of hardware. The CPUs that we use in our general-purpose computers are designed to support the execution of a very wide variety of tasks.
Alongside CPUs, a few years ago a new specialized set of chips appeared: the so-called Graphical Processing Units (GPUs). When you play a video game, or maybe watch your friends or children at play with video games, the fantastically detailed, ever more photorealistic scenes that you observe, rendered in real time by the computer, are based on game engines that take advantage of the processing capabilities of powerful GPUs. Most of the rendering work is the calculation of the intensity and color value of the pixels as they are represented on the screen.
But behind the scenes, the three-dimensional structure of the synthetic image is built from volumes, and those volumes are broken down into a very large number of triangles; it is the color and intensity of these triangles that must be calculated in order to render the scene. It turns out that for these calculations, an architecture that can execute many of them in parallel, independently from each other, is the optimal approach, rather than the architecture that characterizes CPUs, where certain data sets are privileged with respect to others and can become bottlenecks by themselves.
The architecture that enables this parallel execution is the one implemented by GPUs. And modern machine learning algorithms, implemented in AI systems, exhibit the same feature: they execute a very large number of elementary mathematical calculations in parallel, independently from each other, in progressive layers of abstraction. As a consequence, GPUs are today the premier high-performance hardware used for training and running AI applications. The journey of specialization is not over.
A large number of producers, both existing and new, are busy at work designing and producing specialized AI chips, such as TPUs (Tensor Processing Units). These are even more fiercely oriented towards the high-speed execution of neural networks, and towards their training.
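A minimal sketch of the data-parallel pattern that GPUs and TPUs accelerate: the same elementary operation applied to many values, each independent of its neighbours. Plain Python stands in for the hardware here; the point is the absence of dependencies between elements, which is what allows thousands of them to be computed at once.

```python
def apply_elementwise(values, op):
    # Each element is transformed with no dependency on any other element,
    # so in principle every one of these could run on a separate GPU core.
    return [op(v) for v in values]

# e.g. brightening the intensity of every pixel independently,
# clamped to the maximum displayable value
pixels = [0.1, 0.4, 0.8]
brightened = apply_elementwise(pixels, lambda v: min(1.0, v * 1.5))
print(brightened)
```

Rendering triangles and evaluating a neural network layer both fit this shape, which is why the same silicon serves both workloads so well.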
Just as is the case with data centers, but also with individual computers, it is natural and important to also look at issues of energy consumption. This is especially acute in the case of Artificial Intelligence. The human brain consumes about 20 watts, equivalent to a dim incandescent light bulb, but its computing ability is unparalleled, still billions of times more efficient than our AI systems. There are important and specific projects aiming at improving the energy efficiency of hardware in general, and of specialized AI hardware in particular.
Data is the natural unit of computation. We divide it into bits and bytes and kilobytes and megabytes and gigabytes, and we have come to be accustomed to our ability to collect and store and process it. We have been doing this since we started to have computers; 20 or 30 years ago we would indeed be talking about Data Processing, Electronic Data Processing (EDP) used to be a job description, and that would be the way that we thought about computers.
The concept of Big Data is more recent, and it describes data sets so large that our personal computers are not enough to get a hold of them. The sources of big data can be numerous: it can be derived from retail chains' cashier data, attached to consumers' identities through their loyalty cards; information about employees in databases, tens of thousands or hundreds of thousands of people, with their salaries and hours worked and vacation time and so on; stock market data and the trading of stocks on high-frequency trading platforms, where large numbers of trades can be executed in milliseconds across various stock exchanges; health databases that can hold information about millions or tens of millions of individuals; social media platforms that today have hundreds of millions or billions of members, each of whom, every day, will upload photos and videos and post updates and comments on those platforms: literally tens or hundreds of billions of new pieces of data every day on each of
them. And what is the largest source of Big Data? The nodes of the networks of the Internet of Things: sensors that capture vast amounts of data, where the source is not represented by people anymore.
The simplest example of this can be the communications of our mobile phones with the cell towers in a mobile network, or the computers on wheels (which we call cars) that, whether self-driving or not, are now generating very large amounts of data, with dozens of computers in each car measuring and communicating in order to make the car function.
Now: your computer today may have a few tens or hundreds of gigabytes of storage, with maybe the largest at a terabyte or two. But when we are talking about storing volumes of Big Data, we are talking about hundreds or thousands of terabytes, called petabytes, and the clusters of machines that are needed to store and process this data fill warehouses that we call data centers.
We don't only talk about the volume of Big Data, but also about its velocity: how fast the data is generated, and how fast we should be able to capture it, store it, and process it. What kind of variety will the data have, and, within that variety, is it homogeneous, or do we need to pre-process it so that it can be adequately compared and updated?
What is the trustworthiness of the data, its veracity? We need to know whether we should acquire it, discard it, or put a given weight on it as we go about processing it. And finally, and most importantly, there is the value of the data: what are we going to be able to do with it? Act on it, rather than just leave it there. This is where AI applications based on machine learning come in, and why Big Data is essential for them: millions or tens of millions of instances are going to be needed to train a neural network for the various tasks that we are going to see in its applications.
Algorithms are the programs that computers use in order to process the data for some useful result. During the past decades that we have had computers, there have been different schools of programming that would use different approaches for dealing with the data. These schools were born out of the architecture of the computers and the specific hardware that was available at the time.
At the very beginning, computers were not interactive: they would be programmed through punch cards that had to be fed to the computers very carefully, the data would also be represented on stacks of punch cards, and the result would be batch processing. When computers became interactive, and then personal, computer languages and the resulting algorithms became interactive as well. It was possible to experiment much more, to make small programs, to combine those small programs into libraries, and then to attack bigger problems using those libraries.
Typically, the programs would be broken down into procedures that generated these small results, and as a consequence the programming languages would be called procedural languages. The possibility of alternative approaches was clear, and these have been attempted from time to time. But a combination of factors, insufficiently powerful hardware or the lack of a big enough quantity of data, thwarted the emergence and affirmation of the bottom-up approaches that characterize neural networks today.
Neural Networks are not procedural in their nature: they are not the result of the combination of small programs in order to get to a larger result. They apply elementary mathematical operations to a set of inputs in order to get the output. The set of inputs can represent an image or an audio file or any other kind of data. Between the input and the output there are intermediate layers, each of which forms the input of the successive layer, which in turn feeds the layer after that. Modern Neural Networks can have many layers of progressive calculations, and tens or even hundreds of billions of parameters, corresponding to the connections or synapses in biological neural networks.
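The layered computation described above can be sketched in a few lines. The weights and biases below are arbitrary illustrative values, not a trained network:

```python
import math

def sigmoid(x):
    # A common nonlinearity applied to each neuron's weighted sum.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # One output per neuron: an elementary weighted sum of all inputs,
    # plus a bias, passed through the nonlinearity.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                           # the input data
hidden = layer(x, [[0.1, 0.4], [-0.3, 0.2]], [0.0, 0.1])  # intermediate layer
output = layer(hidden, [[0.7, -0.5]], [0.2])              # the layer after that
print(output)
```

Each layer's output becomes the next layer's input; modern networks simply stack many such layers, with billions of weights instead of a handful.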
A fundamental concept for Neural Networks is backpropagation, which takes the result of a calculation down the line, the error at the output, and propagates it back to adjust the weights of the connections that produced the previous calculations.
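A toy illustration of the idea, reduced to a single connection weight: the error at the output is turned into a gradient that adjusts the weight. The learning rate and the data are arbitrary choices; here we want the "network" to learn the function y = 2x:

```python
# Training data for the target function y = 2 * x.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0    # the single connection weight, initially untrained
lr = 0.05  # learning rate

for epoch in range(100):
    for x, target in samples:
        y = w * x                    # forward pass
        grad = 2 * (y - target) * x  # backward pass: gradient of the squared error w.r.t. w
        w -= lr * grad               # adjust the weight against the gradient

print(round(w, 3))  # the weight converges very close to 2.0
```

In a real network the same gradient computation is applied, via the chain rule, to every one of the billions of weights across all the layers.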
Today we have many different types of Neural Networks: Deep Learning Neural Networks, Convolutional Neural Networks, Generative Adversarial Networks and so on.
These serve different purposes in terms of classification of images, for example, or generation of images instead. And their power is increasing: they are not only ever more sophisticated but they are part of ever more sophisticated toolsets in progressive layers of abstraction, where programmers become extremely efficient and they can achieve great results, not without effort, but with an efficacy that is surprising even to the programmers themselves.
The complexity of these Neural Networks starts to be comparable to the volume of data that the Neural Networks themselves act on. The latest version of a text-generating Neural Network called GPT-3 now contains over 100 billion parameters, 100 times as many as the previous version, published a couple of years ago. And we can expect that future Neural Networks are going to contain hundreds or thousands of times more parameters than the current leading ones.
The way Neural Networks work is through an initial setup, followed by an important training phase, where the Neural Network is exposed to a large number of examples. It is based on these examples that it is able to adjust the weights of the connections in order to achieve the desired output (for example, the classification of an image as representing a cat or a dog).
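The training phase can be sketched concretely. In this toy example (illustrative only, with made-up data) the "images" are just random 2-D points in two clusters standing in for cats and dogs; a one-hidden-layer network sees labeled examples, and backpropagation repeatedly nudges the connection weights until the desired classification emerges.

```python
import numpy as np

rng = np.random.default_rng(1)
# 50 examples of class 0 ("cat") and 50 of class 1 ("dog").
X = np.vstack([rng.normal(-2, 1, size=(50, 2)),
               rng.normal(+2, 1, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50, dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    # Forward pass through the layers.
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2).ravel()  # predicted probability of class 1
    # Backward pass: the output error flows back and tells each
    # weight how to change (gradients of the cross-entropy loss).
    dlogits = (p - y)[:, None] / len(y)
    dW2 = h.T @ dlogits
    dh = (dlogits @ W2.T) * (1 - h ** 2)  # back through tanh
    dW1 = X.T @ dh
    W1 -= lr * dW1
    W2 -= lr * dW2

acc = np.mean((p > 0.5) == (y == 1))
print(acc)
```

After training, the network classifies points it was never explicitly programmed to handle; the rule was learned from the examples alone.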
However, the classification can be of anything, not only of images. We are applying Neural Networks to an ever larger number of problems, to the point where now we are actually using Neural Networks to write computer code, and in a certain sense, this closes the circle.
We have used hardware that is getting very complex in order to build and train Neural Networks, and now Neural Networks are writing code, and this code can be used to improve the hardware that we run them on.
Over the course of the past several decades we have become accustomed to the increasing rate of change around us, due to improving technologies. In various industries we have been able to identify this and give it a name: in electronics we call it Moore's Law; in the field of energy, the improvement in the cost per watt of photovoltaic cells is called Swanson's Law, and it expresses a similar exponential function.
To some it has been a big surprise that in Artificial Intelligence, similar processes appear to produce a different kind of outcome.
Stanford University published a report that highlights how the power of AI was expected to increase 30 fold between 2012 and 2020, over the course of eight years. Instead, based on the data they collected, what has been observed is that this increase has been 300,000 fold, ten thousand times larger than the expected improvement.
Together with additional commentary by OpenAI, a non-profit that received a billion dollars of funding from Microsoft, which is also now building for them one of the world's most powerful specialized AI computers, Stanford University concluded that there are two separate eras in the history of Artificial Intelligence: up to 2012, when its power would double approximately every couple of years, according to Moore's Law; and since then, when these doublings happen every three to four months.
But instead of this contrived breaking point, which cannot be attributed to anything specific and cannot be extended generally to the various applications of Artificial Intelligence that benefit from it, what I proposed in 2019 is that we are in front of a new paradigm, based not on a constant acceleration but on an increasing rate of acceleration. I call this a Jolting Technology: a technology whose power changes with an increasing rate of acceleration. In mathematics and physics, the rate of change of acceleration is called jolt, hence the name of the paradigm.
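In kinematic terms the hierarchy can be written out explicitly, with $x(t)$ standing in, by analogy, for the capability of a technology over time:

```latex
v(t) = \frac{dx}{dt} \ \text{(velocity)}, \qquad
a(t) = \frac{d^2x}{dt^2} \ \text{(acceleration)}, \qquad
j(t) = \frac{d^3x}{dt^3} \ \text{(jolt, also called jerk)}
```

An exponential trend like Moore's Law has a constant doubling time and draws a straight line on a logarithmic chart; a jolting trend, with $j(t) > 0$, has shrinking doubling times, so it curves upward even on a logarithmic chart.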
A consequence of this novel interpretation is that we can expect acceleration to further increase. Indeed the next technology that is going to contribute to it, Quantum Computing, is already on the horizon, and combining Artificial intelligence with Quantum Computing is going to be one of the most powerful forces on the planet over the course of the next few decades.
There are many applications of Artificial Intelligence, both in the consumer and the enterprise sectors. It is easy to forget that activities and functionalities that are now part of our daily life seemed almost magical just a few years ago.
When, 20 or 30 years ago, we went on holiday and shot maybe two or three rolls of film, our ability to usefully classify the images diminished as the years went by: we would keep them in a box and that was it. Today, with the help of AI, not a few dozen but literally tens of thousands of images or more, shot over the course of a year, can be automatically labeled and classified, not only by date and place but also by their content, across thousands of different categories. I have more than 200,000 photos in my online storage, and I am able, for example, to ask the storage system (Google Photos in this case) to pull out the photos of people smiling on the beach at sunset, and the photos representing my children while we are on holiday will show up.
On Facebook this kind of classification is used to give a photo a text description that is then spoken aloud through voice synthesis, so that blind people can use the platform as well. And of course, face identification is what we use daily on the iPhone when we unlock it: the phone unfailingly recognizes our face, and does not unlock for the face of anybody else.
30 years ago, speech recognition would only be used by people who were paralyzed; the effort of training the system and then painstakingly adapting one's own speech to its abilities was justified only if no alternative was possible. But today, whether for querying search engines or virtual agents, conversing with a chatbot, or dictating a WhatsApp message instead of recording an audio, we can very easily take advantage of high performance speech recognition systems that require no training and recognize what we say with very high accuracy.
Recommendation engines like those of Netflix, Amazon Music, or Spotify are able to learn from our preferences and create an experience in which we increasingly expect positive outcomes: movies that we like, songs that we appreciate and want to listen to.
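The core idea behind one classic family of recommendation engines, collaborative filtering, fits in a short sketch. This is an illustration only (real services use far more elaborate models, and the ratings below are made up): we recommend to a user the unseen item most liked by the users whose rating patterns are most similar to theirs.

```python
import numpy as np

# Rows: users, columns: items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 0],   # the user we recommend for
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u, v):
    # Similarity of two rating patterns, ignoring overall scale.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

target = 0
others = [i for i in range(len(ratings)) if i != target]
sims = np.array([cosine(ratings[target], ratings[i]) for i in others])

# Score each item the target has not rated by the similarity-weighted
# ratings of the other users.
scores = {}
for j in range(ratings.shape[1]):
    if ratings[target, j] == 0:
        scores[j] = sum(s * ratings[i, j] for s, i in zip(sims, others))

best = max(scores, key=scores.get)
print(best)
```

Here the target user's tastes resemble user 1's far more than user 2's, so item 2 (liked by the similar user) outscores item 3.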
In video games, NPCs (Non-Player Characters), participants in the game driven by AI engines, have an increasing level of sophistication, both in their tactics and in their strategies, to the point where they are able to plan how to defeat us in the game.
More and more enterprises realize the power of artificial intelligence and adopt it in various parts of their operations, similarly to how, decades ago, they progressively adopted digitization, to the point where today it is strange and unexpected to find any corporation that does not use computers and digital processes in order to stay competitive. We can expect that in the future there will be a similarly universal use of artificial intelligence platforms, techniques, and applications. Of course, not all corporations are able or willing to develop these from scratch, which is understandable. It is the responsibility of the entire ecosystem of hardware, software, and application providers to make sure that there are appealing, vertically specialized applications that can be deployed and adapted to the specific needs of enterprises worldwide. We will see the same development as we have seen in the past with data processing and computer systems themselves. The Harvard
Business Review published an interesting article, which then also became a book, entitled "IT Doesn't Matter". The article implied not that it wouldn't be meaningful and necessary to adopt digital processes and platforms in a business. The opposite: it would be compulsory; it would be impossible for a business to exist without them. What it implied was that this becoming universal was something that everybody would have to do, and that doing it would be expected, but that it would not by itself guarantee a differential, sustainable competitive advantage. One of the examples we can look at is from Google. You could say that it is unfair: Google is one of the largest and most important companies in the world in general, and also the pioneer in so many artificial intelligence applications. However, this particular application is significant because it is in an unexpected area. DeepMind is a company that Google acquired several years ago, based in
London. They are famous for a particular neural network they designed that was able, for the first time, to beat a world champion at the game of Go. But DeepMind has other important applications as well, in the field of healthcare and elsewhere. This particular example is in the optimization of data center configurations: as we know, Google has many data centers, and these consume a lot of electricity. Google has also been accused of being wasteful, and has had to defend itself, showing that if you went to the library to find the answer every time you had a question, rather than being able to use Google, it would consume much more energy. But still,
they want to be profitable, so the less energy they use, the better. By combining millions of data points and hundreds of different parameters, DeepMind was able to design a system that was deployed and is used at Google to reduce the energy used in cooling the data centers by 40%. This system not only analyzed unexpected parameters, like the amount of data and computation in a given area of the data center, or which doors and windows are kept open or closed, and how and when, but it is also connected to actuators, so that the system can not only predict what should be done but go ahead and do it as well, achieving the expected beneficial outcomes without the need of additional human intervention. There are all kinds of sectors being impacted by artificial intelligence, and not only in traditional quantitative areas like the example I gave before. For example, human resources is starting to use AI in order to analyze the
curricula that are submitted to companies looking to hire certain talent, in order to display to the human interviewers those candidates who are most likely to be hired. This is already very interesting, because on one hand you see such a company promoting its services by claiming that using its system will diminish eventual bias in hiring. But of course, it is then natural to ask oneself whether the candidates who end up on top really are, in a neutral fashion, the best ones, as they are then interviewed by humans.
The interaction of humans and machines is of fundamental importance. There are very few areas where this is of more vital importance than in operating airplanes. The European Union Aviation Safety Agency is incorporating more and more AI components in a complex system where human decisions have literally life and death consequences. Then there are systems like high frequency trading and quantitative hedge funds, where claiming to beat the market, which is what every fund is attempting to do, becomes less and less believable without resorting to the kinds of deep learning approaches that we have been looking at. In turn, given that everybody uses those approaches, just because you have adopted them, it doesn't mean that you will be able to beat the market.
The rapidly developing field of Artificial Intelligence takes advantage of the effort of thousands of researchers around the world, who publish their results openly in scientific papers, which these days are very rich, in order to make sure that the results are reproducible: they contain not only the textual description of the results but also code and data, and the setups that correspond to the laboratory notebooks in other fields of science, so that the people reading the research are able to go ahead and see for themselves.
OpenAI, a nonprofit originally funded by Elon Musk, which has since received very large support, exceeding a billion dollars, from the likes of Microsoft, published GPT-2 two years ago, a text generation network based on little more than a billion parameters, followed a few months ago by GPT-3, which is based on over 100 billion parameters. Write With Transformer and Talk to Transformer are two web-based interfaces where a custom prompt allows anybody to see the text generated by the system. I invite you to test it yourself, with something like "A modern law firm must understand Artificial Intelligence…", and then hit "complete text" and see the result. Every time you hit the button, a different series of paragraphs is going to be generated, with surprising results. I'm looking forward to hearing what your experience has been.
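The prompt-and-complete behavior those interfaces expose can be illustrated with a deliberately tiny stand-in for GPT-style models. Instead of billions of parameters, this sketch learns only word-to-word (bigram) statistics from a small made-up corpus, then extends a prompt one word at a time, sampling each next word from what followed the current word in training. It is an illustration of the autoregressive idea, not of how GPT itself works internally.

```python
import random
from collections import defaultdict

# Tiny made-up training corpus (real models train on vast text).
corpus = ("a modern law firm must understand artificial intelligence "
          "a modern firm must adopt artificial intelligence tools "
          "intelligence tools help a modern law firm").split()

# For each word, record every word that followed it in the corpus.
model = defaultdict(list)
for w, nxt in zip(corpus, corpus[1:]):
    model[w].append(nxt)

def complete(prompt, n_words=8, seed=42):
    # Extend the prompt one sampled word at a time (autoregressively).
    random.seed(seed)
    words = prompt.split()
    for _ in range(n_words):
        candidates = model.get(words[-1])
        if not candidates:  # dead end: no observed continuation
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(complete("law firm"))
```

Change the seed and, just as on the websites, a different continuation comes out each time; what GPT-3 adds is context far longer than one word and a vastly richer learned model of language.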
Similarly to text generation, the ability to generate images, especially human faces, is both exciting and concerning. On the website "This Person Does Not Exist" you can hit the button and, every time, you see a computer generated face of somebody who does not exist. There are already commercially available applications of generated photos: you can design the models for your website, your mobile app, or many other kinds of applications, based on sex, age, ethnicity, eye color, hair color, hair length, and the emotion expressed. You realize that if we combine these, we have the ability to synthesize, at ever increasing fidelity, information that could deceive others.
The point is that as we rely on our digital world for decisions about our day to day life, but also for corporate decisions or how we vote in the next political elections, our ability to detect fakes is of fundamental importance. Facebook recently organized a contest to accelerate the development of tools, themselves of course AI based, that are able to identify and highlight fakes, so that they can be blocked or flagged, and so that we have knowledge of the reliability of the digital data that surrounds us.
DeepMind deserves a special mention. In 2016 their program AlphaGo, arriving literally 20 or 30 years earlier than experts would have expected, was able to beat then world champion Lee Sedol at the game of Go. A year later, in 2017, they released AlphaGo Zero, which was much more efficient and faster in achieving superhuman performance than AlphaGo had been. It was able to learn faster and better, and it was able to do that without relying on the library of human games that had been the basis of AlphaGo's learning. So while AlphaGo was able to learn, AlphaGo Zero was able to learn to learn. Very different from the machine Deep Blue, designed by IBM in the 90s, which was taught to play chess: AlphaGo learned; AlphaGo Zero learned to learn. And then AlphaZero, a year later still, is able to learn to learn to learn, because AlphaZero is able, without any library of knowledge, without anybody teaching it how to play, just with the basic rules, to excel
at superhuman levels on various types of games: Go, Chess, Shogi.
Now, while the games I mentioned are played individually, AI systems are now also able to collaborate and discover strategies for collaboration, so that they can work together with other AI agents and manipulate their environment in order to achieve a certain goal: in one example, as attackers or defenders in a labyrinth, using the various pieces of digital furniture available to attack or, respectively, to defend themselves ever better. And importantly, again, both the ways these tools can be used and the ways the various agents can collaborate with each other are not explained, are not taught: the agents learn by themselves how to do that.
The opportunities for any organization to learn about artificial intelligence are boundless. It is almost an obligation: the question is, can you afford not to learn about AI? How long can you postpone, until it becomes impossible to catch up with competitors who started sooner? Kevin Kelly, one of the founders of Wired Magazine, puts it very interestingly. He says: "Artificial intelligence is like electricity at the beginning of the 20th century." You could redesign any industrial process from using steam power to using electricity, and the gains in efficiency, flexibility, and reliability were huge. As a consequence, you would be able to produce more and to produce better. And we see that today: steam is still used in some niche applications, but electricity is what dominates our lives. So, he says, today it is the same with artificial intelligence: you must transform your business processes by adding AI to them. He quips that the business plans of the next 10,000
startups are easy to forecast: take x, and add AI. Of course, the world is very, very complex, and we cannot really understand what the consequences of this process are going to be. We could call it, with a neologism once again attributed to Kevin Kelly, the process of cognification. Just as biological evolution created the biosphere, through AI we are now creating a noosphere. There are many resources available for this process to be started by anybody, by any organization, according to their availability and the talent they have or can acquire. And all these resources can be accessed today through the internet: they are just an internet search away.
The larger organizations, like the European Union, China, and the United States, are all investing large sums of money to boost the use of AI in their respective territories. Concretely, you can start by checking out the various platforms that Google, Facebook, IBM, Microsoft, and every other large technology company make available to the developers they want to keep engaged in their ecosystems. It is in their interest to make sure that these advanced tools are available at almost zero cost and with increasing ease of use, so that the learning curve is not very steep and results can be achieved as rapidly as possible. Very often these free resources come with additional credits for running the applications developed on their respective cloud platforms. Of course, all this comes at a cost, which you must be aware of: acquiring sometimes specialized knowledge, so that you become in certain ways captive to that environment, to that ecosystem. But if you are smart
about it, you can often distinguish between the kind of knowledge that is applicable universally and the kind that is specialized for the platform at hand. The scarcest resource today, really, is human talent. That is where competition is really heating up: for experts in artificial intelligence who have had years of hands-on experience with machine learning and data science, and who are able to understand, analyze, configure, run, maintain, and improve valuable AI applications. You should nurture relationships with universities, research centers, and individuals at every possible step, so that you understand the availability of talent and can acquire, on either a temporary or a permanent basis, the talent you need in order to incorporate AI in your processes as well.
There are a lot of challenges in implementing effective, broad, and deep AI solutions. Do you have enough data? Are you able to collect enough data? Do you have the right kind of human talent? Are you able to communicate business objectives and then translate them into the platforms that enable you to implement AI applications effectively? And of course, is your organization ready? Are you going to be able to embrace and deploy the application widely enough, or is it going to be seen by the majority of your organization as a weird, exotic experiment? But even when all of these are overcome, there are some issues that are still intrinsic to today's AI approaches. Machine learning and neural networks are today's leading approach to artificial intelligence. One of their intrinsic characteristics is their unexplainability and, as a consequence, their incomprehensibility, even to those who are developing, fine tuning, and nurturing them. In a many-layered Deep Learning Network it is not at
all clear, even to the most experienced specialist, what the role of a single parameter is, or exactly how changing the parameters is going to lead the system to a given decision. In certain jurisdictions this is a real problem, because the law requires that machine driven decisions facing the public must be explainable. There is a strong effort in the direction of creating explainable AI. Then there is the issue that even when we rely on AI systems and start to trust them, as of today they are very specialized: they are trained on a given set of instances, and their decision making ability is confined to that set of instances. In 1983 (we wouldn't have called it an AI), Stanislav Petrov was in front of an automated system that told him that the Soviet Union was under attack by missiles launched from the United States. The written procedures required him not to judge the output of the system, and to alert his superiors of the attack, which would have
almost certainly triggered World War Three. He disobeyed, and he was right, because it turned out that the system was flawed: it had been triggered by a certain type of cloud formation that the designers didn't realize could set off an alarm when no enemy missile launches were in progress. The degree of autonomy of an AI system is going to be an open question for a long time, especially in military applications, where lethal autonomous weapons are around the corner. And if you want, as I did, you can become a signatory to the open letter that asks for a worldwide ban on lethal autonomous weapons.
Are corporations lethal autonomous weapons? Are they able to go beyond the absolute requirement of maximizing shareholder value? Or will they force their leadership to do things that posterity will judge negatively, or even as catastrophically bad?
We know that AI systems can already be extremely biased. There are systems deployed that are racially biased, or that in other ways express flaws in their data models. These flaws may have been present previously, but AI makes them even more evidently intolerable. The challenge of overseeing and improving AI systems is non trivial: it is not easy to design, implement, run, and update ethics boards, which are possibly themselves as flawed as the AI systems they have to oversee.
Technology enabled our societies to evolve. We are not better people than the Romans just because we have been able to outlaw slavery: we are in possession of better technology, which makes slavery not only morally unacceptable but uneconomic. We can realize that our morality is an expression of our technological abilities if we think about the fact that child labor was similarly outlawed just 100 or 150 years ago. In the mines of Britain there were children as young as 8 or 10 years old excavating coal for 10, 12, 14 hours a day, and the admired entrepreneurs of those times were lobbying, writing editorials in The Times of London, expressing their horror at the forthcoming legislation that would prohibit child labor, maintaining that they couldn't compete and couldn't sustain their businesses under the new circumstances. Today we have a certain kind of social contract that says: as long as you are employed, you are a useful member of society. However, an ever decreasing
number of people work, and ever fewer hours are worked by those people. Can we really accept that the unemployed are useless and discard them from society? Are we sure that we can answer all our questions while disregarding the available pool of human talent, as if it were an excessive cost to be optimized away? AI is seen by many as a fundamental threat to human employability, but we don't have to see it that way. More and more functions can be executed by more and more people, thanks to the superpowers that AI gives them. That is, at least partially, the answer to the challenge of technological unemployment. We are going to build new businesses and new business models. What we have to watch out for is that incumbents may have the power of suborning and capturing the regulatory bodies that should enable open competition among these business models supercharged by AI, keeping society attached to ways of organizing itself, and of organizing labor, that don't belong to the 21st
century any more than slavery or child labor do.
The AI systems that we have illustrated in the previous modules are all specialized intelligences: they are able to exhibit superhuman performance in their particular field. The endeavor of the many who are trying to create artificial intelligence that can apply its smarts, its ability to learn, to address and analyze any possible problem set is called the challenge of Artificial General Intelligence. There are people who believe that AGI is impossible. They regard human intelligence as special, unique, endowed with some secret, possibly metaphysical component that we are never going to be able to analyze, emulate, or implement in any way other than we already do through biological reproduction. However, there are those who are hard at work to realize the objectives of artificial general intelligence. Once this is achieved, these artificial intelligences will be able to analyze a problem, any problem, to organize the resources, including their internal programming,
needed to solve the problem, and then move on. An open question is whether they are going to be endowed with goal setting of their own: will they have the ability and the desire to pick their next goal, their next objective? Because if they are, then we are in front of what is called the Technological Singularity: a point in time when self modifying artificial intelligence is potentially going to transform the world radically, to the point of being perhaps unrecognizable, at least to people who are not themselves augmented by AI systems in their ability to perceive and interpret the world.
Hollywood likes to represent dystopian futures where humans and machines are opposed to each other, and these apocalyptic visions are entertaining. But we have to ask ourselves what the alternatives are. They make us think of desirable futures that we can design and then implement, so that these apocalyptic, entertaining movies become self defeating prophecies. A good example is the self driving car. For decades, hundreds of companies have been investing billions of dollars pursuing the goal of an automobile that is able to drive itself.
There are glimpses of this becoming a possibility, concretely, over the course of the next few years. There are still important technological and regulatory hurdles to overcome, but when they are resolved we will be living in a world where 90% and more of the car accidents that today kill more than a million people every year on the planet are expected to be avoided. It is a wonderful example through which we can realize that humans and machines have a common enemy: we are allied with smart machines, and our enemy is the dumb machine instead. We have to understand what kind of future we want for ourselves on this planet and, as we keep exploring, for the rest of the universe, in an age when humanity will pursue its dreams in alliance with millions of kinds of artificial intelligences.
The Quantum World
The world around us is something we recognize, interpret, and act on based on the phenomena that correspond to the scales our human senses can usefully perceive. These scales can span some orders of magnitude, but not more than a few. If we are looking at the human scale of a few meters in length, then the world that we intuitively understand runs from a few millimeters to a few kilometers, for example, and the same goes for mass, duration, and other dimensions. Outside of those, our intuitions and our instincts don't serve us well in understanding how to create useful theories and how to act on them. But the world is still made of phenomena that we can analyze, describe, and understand, even if they are radically different from our common sense expectations.
Quantum phenomena manifest themselves outside of these orders of magnitude, at the smallest scales of the world. The way atoms and their constituents behave, the way photons and electrons interact, the way molecules form: this is the realm where quantum mechanics rules. The theoretical explanation of the photoelectric effect by Einstein in 1905 can, somewhat arbitrarily, be taken as the beginning of the modern science of quantum mechanics; many, many other physicists of course worked to put this branch of science together. The counterintuitive phenomena mentioned before include things like wave-particle duality, where it is not reasonable to say that the electron is a wave or that the electron is a particle, because it turns out that the electron is kind of both. We have experiments that prove this dual nature of the electron, and actually of any other particle, and as a consequence of ourselves, since we are made up of those particles.
Or the Heisenberg uncertainty principle, which says that we cannot simultaneously measure to arbitrary precision the velocity vector and the position of an elementary particle such as the electron: there are minimal bounds to this. If we measure the velocity vector, we will know how fast and in what direction the electron is going, but we will not know where it is. On the other hand, if we measure very precisely where it is at a given moment in time, we will not be able to tell where it is going to end up in subsequent moments.
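Formally, the principle is stated in terms of position and momentum (mass times velocity), and the minimal bound mentioned above is quantified as:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

where $\Delta x$ and $\Delta p$ are the statistical spreads of position and momentum, and $\hbar$ is the reduced Planck constant. Shrinking one spread necessarily grows the other; the product can never fall below $\hbar/2$.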
It is very important to understand that these behaviors and these principles are not due to an insufficiently detailed theoretical framework, such that we would just need to work for another one or two hundred years to create a better version of quantum mechanics that eliminates them. There have actually been experimental results, confirming theoretical predictions, that prove from an epistemological point of view that quantum mechanics is correct in its formulation, regardless of how counterintuitive it is, regardless of how classical logic is unable to parse it.
Some of the leading figures in physics disliked the fact that the universe is like this, including Einstein himself, who was looking for hidden variables that could explain away this set of phenomena, but he couldn't find them because they were not there. Today it is proven that they are not there.
Quantum electrodynamics, for example, the relativistic version of quantum mechanics applied to the behavior of electrons at high speeds, is one of the most precise scientific theories ever, tested to 10 parts in a billion. When you build a house, or even a skyscraper, or when you create very precise instruments, for example for brain surgery, the match between your designs and their execution will never reach that kind of precision. We have nothing in this world that is executed as precisely as quantum electrodynamics is able to make predictions that can be tested by experiments.
Keeping Quantum At Bay
Quantum phenomena are pretty well understood, and quantum mechanics is an incredibly successful set of theories and matching experiments. However, the maturing of applications, the engineering of solutions that incorporate these advanced theories, can take some time.
When the changes needed to incorporate them are radical, the opposite can also happen: we try to isolate, and practically ignore, the quantum effects for as long as possible.
In the design of electronic circuits, we have been able to achieve incredible improvements by shrinking the size of the components, while being allowed, for the most part, to keep ignoring quantum effects. The people who design the circuits, and the software that helps them design these circuits, don't take into account the atomic-scale behavior of elementary particles such as the electrons that move in the electronic circuits.
The exception is at the foundries, where people have to turn the designs into actual physical instantiations: there, where the chemical properties of the components matter, and where the wavelengths of the light masking or revealing parts of the circuits allow the etching of the circuits at small sizes, quantum effects start to be faced.
So the foundries were already making certain adjustments that were needed because of quantum phenomena. Today's generations of electronic circuits have feature sizes of seven nanometers, and the next couple of generations will bring us to five, and then to three nanometers. At those scales quantum effects will be harder and harder to ignore, and the design of classical computers will have to be updated, with new processes, new software, and new thinking that actually starts to incorporate the quantum behavior of electrons in circuits rather than ignoring it, as was possible until today.
Will it make sense to try to replicate the behavior of classical computers with the ecosystem of solutions that will be needed at that point? That would be a bit like continuing to design cars that look like horse-drawn carriages, just without the horse. The alternative is to fully embrace what is new, and to create a completely new computer architecture.
We already have many applications that are based on our understanding of quantum phenomena that are widely deployed. Let me give a few examples.
We all have experience with lasers; they are in widespread consumer and industrial use. Probably the most common use of lasers is in barcode scanners, in retail as well as in general warehouse applications. They are at the basis of something as humble as a presentation pointer, of DVD players and compact disc players, and of very many industrial applications such as cutting, welding, marking, and cleaning. They are fundamental for the computer industry in the photolithography used in the fabrication of electronic components, and in the highest speed fiber optic connections, whether between our homes and businesses and the internet service providers or in the communications backbone, including the transatlantic fiber optic cable bundles. There are many applications of lasers in the medical field as well: in cosmetic surgery for the removal of scars or hair, and in surgery in general as a precision scalpel.
Laser is an acronym that stands for light amplification by stimulated emission of radiation; the first laser was built in 1960. The characteristics that we all recognize in lasers, being composed of light of a single frequency, a single color, and forming a ray that is very narrow and which doesn't diverge but stays narrow over great distances, are all due to the quantum nature of its origin.
An important future application of lasers that is already being widely tested, and whose price is rapidly diminishing, is LIDAR, which stands for light detection and ranging. It is a 3D laser scanning technique used to create very high resolution maps in applications like archaeology, geography, and forestry, but also, and this is what is leading to its widest current deployment, in the control and navigation of autonomous cars. A few years ago, a LIDAR system would be very big and heavy, as well as expensive at hundreds of thousands of dollars each. But today it is getting miniaturized and its cost is being reduced radically. The latest model of iPad Pro includes a LIDAR scanner that is going to be used for advanced, high quality augmented reality applications.
The second example of quantum technologies in widespread use, with which we are all directly familiar, is GPS navigation. GPS stands for global positioning system, and it is a satellite based radio navigation system developed and deployed by the US military in the '70s. Now there are also complementary systems: Galileo, deployed by Europe; GLONASS, by Russia; and BeiDou, by China. Originally developed by the military, GPS was authorized in the '80s for civilian use as well. The receivers for positioning have become ever cheaper and more miniaturized, and are now included in all mobile phones. The fundamental contribution of quantum technology to the GPS system is the incorporation of atomic clocks of extremely high precision, kept synchronized among the satellites and earth based stations to an accuracy of 20-30 nanoseconds. Before quantum technology, it wouldn't have been possible to create and maintain such a system. It is interesting to note that the other physics revolution of the 20th century, the theory of relativity, was also incorporated in the GPS system. It allows the system to take into account time dilation due to both the speed of the satellites and the variation of the gravitational field at their higher orbits with respect to the surface of the Earth. Every time your radio or your computer or your clock synchronizes itself with remote time servers, those servers are based on atomic clocks.
Distances can also be measured very precisely using quantum technology. Very famously, gravitational waves were recently detected using interferometry based on quantum technologies.
As we can see, there are already many applications of quantum technologies, achieved by translating theoretical and experimental scientific understanding into engineering implementations through extensive research and development efforts over the course of the past 100 years.
However, the most disruptive applications of quantum technology are still ahead of us.
In parallel with this attitude of taking the quantum realm into account only when forced to, there has been, for the past 40 or 50 years, an initially theoretical and then, for the past 20 years, a somewhat more practical approach: to build from the bottom up computers that take advantage of quantum phenomena directly, in particular using superposition and entanglement in the calculation process itself.
Superposition means that while a measurement of a parameter of a quantum system gives out one particular value, the system itself is able to hold simultaneously a very large number of values of that parameter. Entanglement means that, under certain conditions, this ability to combine and hold the values of the parameters that we need extends across the entire quantum system. Rather than being frightened by these phenomena and trying to hide them under the carpet, pretending that they don't exist and crossing our fingers that our circuits and tools won't have to take them into account, we can say: "Wow, this is incredible, how can we take advantage of a new way of thinking about computers that can leverage this?" Then we have a truly jolting technology in our hands.
The term jolting technology refers to those technologies that don't merely accelerate, but whose acceleration increases over time, the jolt being the first derivative of acceleration: an increasing rate of acceleration. The reason why quantum technologies, and quantum computing natively employing superposition and entanglement, are jolting technologies is that they are super exponential. They take advantage of our ability to design circuits, which is already increasing exponentially. And on top of that, at any given number of components, they are exponentially faster than traditional computers: the advantage that each individual component brings to the programs being executed adds to the increase in the number of components. So quantum technology is an inherently jolting technology. We have already seen a breathtaking improvement in the power of our computers over the course of the decades, in our mainframes, workstations, personal computers, mobile phones, and internet of things sensor nodes. Quantum computers are going to go way beyond this.
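The difference between merely exponential and jolting growth can be sketched numerically. This is a toy illustration: the two functions are chosen only for clarity, not as a model of any actual technology.

```python
# Toy comparison: an exponential trend grows at a constant doubling rate,
# while a "jolting" (super-exponential) trend's growth rate itself speeds up.

def exponential(t, base=2):
    # constant rate of growth: doubles at every step
    return base ** t

def jolting(t, base=2):
    # the exponent itself grows exponentially: 2^(2^t)
    return base ** (base ** t)

for t in range(5):
    print(t, exponential(t), jolting(t))
# at t=3 the exponential trend reaches 8, the jolting one is already at 256;
# at t=4 they are 16 versus 65536
```

Plotted on a logarithmic chart, the first is a straight line while the second still curves upward, which is exactly the signature of a jolting technology.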
But they are extremely delicate beasts, at least as we know them today. To keep a quantum computer with current architectures holding its superposition and entanglement, one particular approach is to make the circuits superconducting: the resistance of the conductors disappears, the electrons can interact in the circuits much better, and their interactions can be held in the desired quantum states for longer periods of time. Superconductors come in many different types and can have many different features, but their common characteristic is that they require extremely cold temperatures. Temperature is a statistical representation of the vibration of atoms, so it is easy to understand that a decrease in temperature is a slowing down of this vibration. If you project that slowing down forward, you understand that there is a minimum temperature below which it is not possible to go. That is called absolute zero. Actually, you cannot even touch absolute zero: you can only get ever closer to it.
When you start measuring, the measurement itself warms up the thing that you are measuring. So absolute zero is a point that you can never touch, only approach ever more closely. Even in intergalactic space there is a statistical measure of temperature: we call it the background radiation of the universe, and it is about three kelvin above absolute zero, which corresponds to minus 273.15 degrees on the Celsius scale. From the freezing temperature of water, you go down 273.15 degrees below zero, and that is absolute zero; the temperature of the universe today is about three degrees above that. Those three degrees are the consequence of the cooling of the universe as it expanded over the 13.8 billion years after the Big Bang. How cold do the superconducting devices in quantum computers need to be in order to function? 10 kelvin? 5 kelvin? They need to be at a few thousandths of a kelvin!
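The conversions between the temperature scales mentioned above are simple arithmetic. Note that the ~2.7 K cosmic background and the roughly 0.015 K operating point of superconducting qubits are commonly cited approximate figures, not exact constants.

```python
# Kelvin <-> Celsius conversions for the temperatures discussed above.

ABSOLUTE_ZERO_C = -273.15  # absolute zero on the Celsius scale

def celsius_to_kelvin(c):
    return c - ABSOLUTE_ZERO_C

def kelvin_to_celsius(k):
    return k + ABSOLUTE_ZERO_C

print(celsius_to_kelvin(0.0))    # 273.15 -> freezing point of water, in kelvin
print(kelvin_to_celsius(2.7))    # ≈ -270.45 -> cosmic background, in Celsius
print(kelvin_to_celsius(0.015))  # ≈ -273.135 -> inside a quantum computer's refrigerator
```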
These are temperatures that are not present anywhere else in the universe: nature, and all the phenomena that nature produced, did not find a way to generate this kind of cold. The coldest places in the universe are the ones we are producing today, needed to maintain the delicate structures of our current quantum computing devices, so that their superconducting features can allow the components to exhibit the superposition and entanglement we need in order to make use of them.
The units of traditional computers are called bits, and they can be represented in the state of transistors that either let current pass or don't. The equivalent units in quantum computers are called qubits, or quantum bits. The physical implementation of a qubit depends on the particular architecture of the quantum computer it lives in. Whereas in the early decades of computing several traditional computer architectures were explored, by the '70s we had definitely settled on the so-called von Neumann architecture, based on electronic components at an increasing degree of integration: central processing units, memory chips and so on. With quantum computers, the field is still open as to what the winning architecture will be.
We still have both analog quantum computers and digital quantum computers in the various research and development laboratories. There are optical approaches to quantum computing, as well as various alternative ways of using electronic charges for the computation itself. Whatever the approach, and independently of the particular hardware implementation, every quantum computer shares the characteristic of leveraging the ability to create a superposition of the various possible states for each qubit, as well as the entanglement of the superposition states of several qubits together.
The superposition of the various possible states of a qubit means that, contrary to a classical bit, which is exclusively either zero or one, the qubit internally stores the probability amplitude, represented by a complex number, of each of the possible outcomes: whether the outcome is going to be zero or one, statistically, or a combination of those possible outcomes. In a traditional computer, the number of states that a series of bits can encode increases exponentially with the number of components. The number of possible combinations represented in an entangled quantum state of superpositions explodes joltingly.
If we have a traditional computer with three bits, it is going to be able to represent two to the third power states, which is eight. If we add an additional bit, the number of states will double to 16.
With a quantum computer, the increase in the number of states is not merely exponential, but super exponential. With every qubit added, we are multiplying many fold the number of possible states that are represented, not merely doubling it.
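The state counting above can be made concrete with a short sketch. One caveat on the assumptions: the 16 bytes per amplitude figure assumes each amplitude is stored as one double-precision complex number, a common but not universal choice in classical simulators.

```python
# Counting states: n classical bits hold ONE of 2^n configurations at a time,
# while describing an n-qubit register requires tracking 2^n complex
# amplitudes simultaneously. Simulating those amplitudes on a classical
# machine therefore needs memory that doubles with every qubit added.

def classical_configurations(n_bits):
    return 2 ** n_bits

def amplitudes_to_track(n_qubits, bytes_per_amplitude=16):
    # one complex number (two 8-byte floats) per basis state
    count = 2 ** n_qubits
    return count, count * bytes_per_amplitude

print(classical_configurations(3))  # 8
print(classical_configurations(4))  # 16
print(amplitudes_to_track(30))      # (1073741824, 17179869184): ~10^9 amplitudes, ~17 GB
```

This is why adding even a handful of qubits quickly pushes a problem beyond the reach of classical simulation.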
Whether we design the qubits as components of superconducting electronic circuits or, in the optical versions, using photons, our ability to create quantum circuits follows the learning curve of electronics itself. So when you add the ability to build ever more powerful quantum computers via the learning curve of electronics to the quantum computer itself becoming more powerful out of proportion with the increase in its components, it is easy to realize that quantum computers are truly a jolting technology.
The first quantum computer announced outside of academia, in an industrial setting where you could put a few million dollars down and buy one, was the D-Wave One by D-Wave, a Canadian company, in 2007. I was there at the presentation, which was very exciting, very emotional, and also very controversial, because D-Wave beat everybody in announcing the availability of their computer. A lot of the traditional participants in the field, who had planned research programs for the next 30 years, were extremely critical and extremely skeptical of the announcement. The kindest said that D-Wave was mistaken; the harshest, that they were a fraud. Having personally come to know Geordie Rose, the founder, I knew that he would not have expended the energy, the intellectual talent, and the passion that he did in building his company if he didn't truly believe in it.
There would have been easier ways to run a scam. But it was up to D-Wave to prove that their computer was indeed a better solution to certain problems than what traditional computers could do.
I first used the D-Wave One computer in 2010 when, together with Alex Lightman, I organized the H+ Summit at Harvard University, to optimize the schedule of 50 speakers over the course of two days. We had dozens of different constraints, which certainly didn't require a quantum computer, but it was a very fun exercise that Geordie very kindly and happily passed on to his team, who then came back with the solution for how the various speakers' slots should be assigned in order to satisfy all those constraints. These kinds of optimization problems are especially well suited to the kind of computer that D-Wave built then and keeps building.
Quantum supremacy means that, for every practical purpose, the problem you are trying to solve cannot be solved with a classical computer, because it would take too much time, but a quantum computer is happily able to do it. Quantum supremacy is the industry's benchmark for confirming that the kind of breakthroughs have been made that will make quantum computers desirable to produce and sell for practical applications.
Already in 2015, the joint Google-NASA Quantum AI Lab published some astonishing results: using the D-Wave computer for certain types of optimization problems, they achieved a hundred-million-fold speedup compared to the same problem being run on traditional classical computers.
And in 2019, they officially announced that they had achieved quantum supremacy on a different type of quantum computer of their own design, achieving a trillion-fold speedup compared to a classical computer working on the same type of problem.
IBM, a competitor to Google in the field of quantum computers, strenuously objected shortly after the announcement. They affirmed that, rather than the tens of thousands of years claimed, a classical computer would need only several days to complete the calculation that the Google computer completed in a few minutes. What this objection avoids mentioning is that the power of a quantum computer explodes with the addition of merely a handful of qubits: even if IBM were right today, and the quantum computer is only a few thousand times faster, in a year's time it will be a few million times faster, and then a few billion times faster soon after that.
There are problems that are completely intractable through classical means, and they are surprisingly common. One example is the so-called traveling salesman problem: what is the shortest route that a traveling salesman should follow while visiting each city exactly once? As the number of cities increases, the computation needed rapidly becomes unfeasible. These problems, and many others, are such that not even using all the matter in the universe to build a classical computer would we be able to solve them before the universe itself dies. Quantum computers, instead, are expected to be able to address them and achieve optimal results rapidly. Any corporation or nation state that is able to reliably produce a quantum computer will gain an important competitive advantage.
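The combinatorial blow-up behind the traveling salesman problem can be seen in a few lines of Python. This is a toy sketch: the brute-force approach shown is the naive exhaustive search, and the city coordinates are made up purely for illustration.

```python
# Brute-force traveling salesman: try every ordering of the cities.
# The number of candidate routes grows factorially, which is why even
# modest instances overwhelm classical exhaustive search.

from itertools import permutations
from math import dist, factorial

def shortest_tour(cities):
    """Length of the shortest closed tour visiting every city exactly once."""
    start, *rest = range(len(cities))
    best = float("inf")
    for order in permutations(rest):
        route = (start, *order, start)
        length = sum(dist(cities[a], cities[b]) for a, b in zip(route, route[1:]))
        best = min(best, length)
    return best

cities = [(0, 0), (0, 3), (4, 3), (4, 0)]  # a 4x3 rectangle
print(shortest_tour(cities))  # 14.0, the rectangle's perimeter
print(factorial(20))          # routes to check for just 21 cities: ~2.4 * 10^18
```

With 21 cities there are already more routes than a desktop machine could enumerate in a lifetime, and every added city multiplies the count again.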
That is why the US, the EU and China as well are investing billions of dollars in the funding of quantum computing research.
What are going to be the large, extremely useful classes of problems that quantum computers will natively be used for? They are not going to be the compiling of census data that gave birth to mainframe computers. They are not going to be writing a novel, emailing, or web browsing, which are the tasks of personal computers. They are not going to be playing games like a console, or video chatting, photo sharing, or navigating on a map, like we do on our mobile phones.
So we have to carry out an iterative search to make sure we find the problems these computers are good for. We can already start thinking about them, and start trying to understand what they are going to be. A first target is the class of problems that naturally involve quantum mechanics: for example, simulating quantum systems in materials science, such as high temperature superconductors, so that we have better materials with which to design quantum computers; in chemistry, for molecule design; or in biology, for protein folding and for designing better antibiotics or better living systems of some kind, which we haven't been able to dream about for lack of tools; and certainly many, many more.
Geordie Rose, the founder of D-Wave, is known for stating that software itself is what I would call a jolting technology. Geordie says that if given the choice between using current algorithms on a computer from 30 years ago, let's say an Apple II, or using the algorithms from 30 years ago on the fastest computer of today, he would pick the first.
But developing software for quantum computers requires a completely different mindset and a completely different set of tools. We don't have quantum software code editors. We don't have quantum software programming languages. We don't have quantum software debuggers, testing suites, or best practices. Very few people have learned how to program quantum computers. There are some: for example, there is a Canadian company called 1QBit that specializes in developing quantum algorithms running on quantum computers.
You don't need a quantum computer in order to develop quantum algorithms: pencil and paper, a lot of passion, and a lot of talent are enough. But today you can have a leapfrog experience, because Google, Microsoft, IBM, and others are offering free access to their platforms for experimenting with quantum computers. So if you find this field exciting, and you believe that you can be curious and persistent enough to learn about it and then maybe contribute to its advancement, you can start today, and you can start learning for free with incredibly powerful platforms that are available online.
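To get a feel for what those platforms do under the hood, here is a minimal, dependency-free sketch of the core linear algebra: applying a Hadamard gate, one of the standard quantum gates, to a single qubit to put it into an equal superposition. Real toolkits such as the ones offered by Google, IBM, and Microsoft wrap this in full circuit languages; this is only the bare mathematics.

```python
# Simulating one qubit: a state is a pair of complex amplitudes, and a gate
# is a 2x2 matrix applied to that pair. Measurement probabilities are the
# squared magnitudes of the amplitudes.

from math import sqrt

def apply_gate(gate, state):
    # 2x2 matrix times 2-component state vector
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

HADAMARD = [[1 / sqrt(2),  1 / sqrt(2)],
            [1 / sqrt(2), -1 / sqrt(2)]]

zero = [1.0, 0.0]                  # a qubit initialized to |0>
superposed = apply_gate(HADAMARD, zero)
probabilities = [abs(a) ** 2 for a in superposed]
print(probabilities)               # ≈ [0.5, 0.5]: equal chance of measuring 0 or 1
```

Scaling this same bookkeeping up to many entangled qubits is exactly what becomes classically infeasible, and what quantum hardware does natively.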
When traditional computers were starting to be designed, it was not at all clear that it would be possible to scale them. The initial configurations of components were very delicate, and error correction mechanisms for the electronic circuits had to be designed to counteract the various possible mistakes that individual components could introduce into the calculations. This was necessary for the flourishing of the electronics industry. With quantum computers we are at the same stage: we realize that the error rate in quantum circuits is very high, the so-called noisiness of the system, and we need to counteract this by introducing quantum error correction mechanisms. These are expected to be much more difficult than what has been achieved in classical computers, and they will need to be implemented differently for the different approaches being taken in the design of quantum computer architectures: architectures that are radically diverse, with none of them having achieved a winning position that clearly indicates everybody else is going to embrace that particular solution.
On top of the need for error correction, there is another intrinsic issue with the output of quantum computers, one directly related to their power: what can we do to verify their results? The classical computers that could, at least theoretically, verify those results would take thousands or billions of years to confirm whether they are correct. It has been mathematically demonstrated that for many kinds of problems there are no shortcuts: we don't have a way of reformulating the problem that the quantum computer solved in order to cross check the result. It will be similar to artificial intelligence systems, where explainability is likely to be addressed by specialized AI systems whose task is to understand and describe the inner workings of others. The verification of complex quantum computers is likely to be delegated to specialized quantum computers as well.
The ability to communicate knowing that parties beyond the intended recipient cannot intercept, corrupt, or impede the communication itself is of fundamental importance in many fields. Cryptography is the discipline tasked with achieving these results. There have been important advances since the times of the Romans, from whom we have the first cryptographic systems we know of, deployed in times of war, through to the 20th century, when a top secret effort using the first specialized computers at Bletchley Park in England was able to decipher secret German communications, fundamentally influencing the outcome of the Second World War.
Today we rely on cryptography to secure communications in many different areas, including financial communications with banks, ecommerce transactions, confidential commercial and industrial communications, legal electronic communications between law firms and their clients, diplomatic communications, and so on. Even though the mathematical approaches implemented throughout our worldwide electronic communication infrastructure have evolved and are sophisticated, it is expected that they will be vulnerable to decryption by future quantum computers. As a consequence, there is an effort underway to upgrade these algorithms, and to design specific, so-called quantum safe ones. This means that they employ mathematical principles that do not exhibit the weaknesses of the current ones, and which are not going to fall under the onslaught of quantum computers.
A complementary subject is quantum cryptography itself: employing quantum phenomena to secure communications. Here there has been important progress, both in Europe and in China, in setting up communication channels: in the first case across the Mediterranean, between Malta and Sicily; in the second case, somewhat more spectacularly, between a base station on Earth and a satellite in orbit. These quantum communications are demonstrably impossible to decode, by quantum systems or otherwise. Moreover, while these systems take advantage of some of the phenomena already described, another one also contributes to the impossibility of intercepting quantum secured communications: the entanglement of the communication is unavoidably destroyed in the presence of a third party. The interception itself interferes with the communication in a manner that the sender and the recipient will necessarily notice, and the attempt by a third party will be revealed.
Thank you for watching the modules of the K&L Gates Technology Seminar on Quantum Technologies. We have also organized a Q&A session where you can ask for clarifications and make comments, but also vote on the questions asked by others. You are also able to act on the knowledge acquired by using the numerous resources that have been collected and curated, and that are available to you both in the descriptions of the individual videos and separately. This is really just the beginning. Thank you, and see you in the next unit of the K&L Gates Technology Seminar Series.
The acceleration of technological change is what is driving the world today. It is what you see in your everyday life. And it is the basis of a lot of questions that are unanswered, and sometimes unanswerable, because the way the changes in many sectors interact makes things so complex that even those labeled experts should be honest enough to say that it is, for every practical purpose, impossible to give precise answers about how the future is going to unfold.
There are people who are in a very unenviable situation: regulators, politicians, teachers, heads of families, CEOs. Each of them is elected, or appointed, or finds themselves in a position where everybody else expects them to have the answers. That's what their job is, that's what their role is, but it is not possible.
On the left side of this image you can see one of the first prototypes that Google built for self driving cars. In the future we will have self driving transportation of very many different forms, and yes, self driving cars are coming. But what is the relationship with that other image, which represents a synthetic human heart? The relationship is this: today, the only source of the replacement organs that people need is people who die in car accidents. When self driving cars become widely adopted, people will stop dying in car accidents, and we will have a scarcity of replacement organs. So those who need a heart, a kidney, an eye, a lung, and many other things that today we harvest from people, especially the young people who die in car accidents, will say: "Hey, where are my replacement organs? I need them!" And an entire new industry of 3D printed synthetic organs is now being born, which will be not only necessary, but vastly better than what we have today.
Because there will be no immune reactions rejecting the implants, as they will be grown from your own stem cells. And there will be no waiting list, while today people die without getting the organ they need to survive, because it doesn't arrive in time.
Very few people realize that one leads to the other. Or, how many of you have had your DNA sequenced by 23andMe, Ancestry.com, or any other service? This past Thanksgiving and Christmas it was one of the most popular gifts in the United States. When the whole family gets together, you receive a little vial, you spit in the vial, you mail it, and two weeks later you find out all sorts of surprising things.
All of these technologies that we will be covering in this series are very dangerous, absolutely, very, very dangerous, just like fire. And yet, after we started playing with fire 100,000 years ago, we never stopped, right? So as we discuss all of these technologies and their uses, we will have to look at how we measure their risks, as an enterprise as well as a society, and then ask: "Are our decisions aligned with the type of risk that we want to take? Are they misaligned, so that we are taking more risks than we should? Are we applying excessive precaution, leading us to do less than we could and maybe should, giving up the cumulative future benefits that we could enjoy?"
So, one of the types of risk that DNA sequencing helps manage better than before is that of illnesses that have a very strong genetic basis. There is still no certainty, but pretty soon it is expected to be enough to completely disrupt the business model of the insurance companies, which not only won't be able to cope with self driving cars, because cars are not going to be in accidents anymore, but are not going to be able to cope with the universal availability of genetic information either. People may decide not to buy insurance because, based on their genetics, they don't believe they will develop the kind of illness the insurance covers; or the opposite, somebody may say: "Yes, let me take out this insurance, because my genes tell me that I need it." Both of those work against the business model of the insurance companies.
But it doesn't matter. Because those who understand, not in detail, which is impossible, but in terms of the underlying mechanisms and dynamics of what is going on, are able to build trillion dollar values, which is what we are seeing in the stock markets, in the market capitalization of the most valuable companies. Actually, I was part of the group that designed Singularity University, an organization that analyzes technological change and teaches it, born at the NASA Research Park in California with funding from Google. One of its founders is Peter Diamandis, whose fourth book is entitled "The Future Is Faster Than You Think". With our students we used to have lectures during the day on campus, and then fireside chats in the evening. One of the fireside chats led by Peter is called "Who Wants To Be A Quadrillionaire?" Not a millionaire, not a billionaire, not a trillionaire, but a quadrillionaire, giving a glimpse of the unbounded opportunities that the future holds for all.
The simplest example of exponentials, with which you are all familiar, is the way our computers are getting more and more powerful. The typical assumption is that they double their power every couple of years. This is not a natural law, like the universal gravitation driving how an apple falls or the orbit of the moon, or other laws of physics or chemistry. It is a self fulfilling prophecy, where engineers all over the world compete with each other, but with the common objective of making computers ever more powerful, and they are able to overcome every challenge they meet. These computers have become our smartphones, and now they are disappearing into the environment to constitute the nodes of the next generation of networks, the Internet of Things. But I want to show you that the mechanism of exponential change is actually everywhere.
The Human Genome Project started in 1990; it was supposed to last 15 years, and had a budget of $3 billion, a pretty sizable project. The objective was to decode the first complete human genome. Regardless of the precise numbers, year after year they were making very, very little progress. As a matter of fact, after seven years they were just at 1% of the final goal, and even the experts were very much worried. They were almost in a panic. They were saying: "After seven years, halfway through our original schedule, we are just at 1%. We don't have 700 years to complete the project. We don't have the $150 billion that appears to be needed. This is crazy, probably we have to give up!" It felt like a disaster because they were in the deceptive phase of exponential change, where their linear mentality was driving them toward hopeless projections. They didn't realize what was actually going on, until at a certain point the exponential change crossed a threshold and became visible. It was the same rhythm as before, but people had been ignoring it. At year seven they were at 1%, at year eight at 2%, at year nine at 4%, at year ten at 8%. And then 16, 32, 64. And right on time, right on budget, in the year 2000, they triumphantly announced that they had achieved the goal of sequencing an entire human genome.
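The doubling arithmetic that the experts missed can be sketched in a few lines of Python. The 1%-at-year-seven figure is taken from the story above; everything else is just arithmetic:

```python
# If progress doubles every year and stands at 1% after year seven,
# the remaining 99% takes only about seven more years.
def completion_year(start_year: int, start_percent: float) -> int:
    """Year in which yearly-doubling progress first reaches 100%."""
    year, percent = start_year, start_percent
    while percent < 100:
        year += 1
        percent *= 2
    return year

# 1% -> 2, 4, 8, 16, 32, 64, 128: complete by year 14 of a 15-year plan
```

Seven years of near-invisible progress and seven years of explosive progress sit on the same exponential line.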
But just because that magical hundred percent was reached, technology didn't stop. Another 15 years later, the same could be achieved in a couple of weeks instead of 15 years, and for $2,000 instead of $3 billion. And still, technology doesn't stop. Machines are already being built that, when deployed in a few years, will allow the sequencing of an entire human genome for an incremental cost of a couple of cents, practically in real time.
So, what are the consequences for the world, now and tomorrow? You and your clients are living in this world of science fiction, where I guarantee there will be corporations, technology companies, and government agencies using this technology.
Already a few years ago, some of you may remember, there was a public scandal: IKEA was selling meatballs containing horsemeat. Now, in Europe people eat horses, so it wasn't a big deal. But in America, people do not eat horses. And when they go to IKEA and have their meatballs, they don't expect to be eating horse. So it was a big deal to them. And IKEA apologized; they said: "Oh, we are very sorry, we will monitor our processes better". But I actually went and checked which PR company was behind those articles and who that PR company was working for. Can you guess? The PR company was working for the makers of the DNA sequencing machines, because you could never check the DNA of a meatball before, but now you can. It's not that there was never horse meat in the meatballs in the past. Do you know what happens in the slaughterhouses? We don't even want to know. But regardless, now we can check the end result. And very smartly, they used this new kind of knowledge to talk about some of the issues it raises, as a new industry is being born.
There is a paradigm shift from accelerating technological change, which follows an exponential curve, to jolting technological change.
What is a jolt? The jolt is the rate of change of acceleration, just as acceleration is the rate of change of velocity.
Imagine a rocket. The engines are at full power, and the rocket carries on board both its propellant and its oxidizer. As they chemically combine and the rocket expels them to generate thrust, its mass diminishes, and the diminishing mass leads to an increasing acceleration during the ascent.
There are technologies today that are not merely accelerating at an exponential rate. Their acceleration is increasing: they are superexponential, they are jolting.
We have to get ready for their world-changing power!
Technology creates change. This accumulates slowly at the beginning, almost imperceptibly. It is easiest to think it is just linear change. But the rate of technological change is itself changing: these are accelerating technologies. When the accumulating change goes beyond a threshold, suddenly it becomes undeniable, disruptive, overwhelming. Accelerating technological change is what Singularity University analyzes and shares in its conferences and courses.
The mathematical formula for accelerating change is an exponential function, for example 2^x. Exponentials are often represented graphically on a logarithmic chart, where they show up as a line. It is possible to prepare for the disruptions of exponential technologies, to become an exponential organization.
Jolting technologies are those where the rate of technology acceleration is increasing. As the generations of technologies develop faster, deployment is faster and more comprehensive, and network effects are stronger. But they can incubate undetected for a long time and then burst forth very rapidly. The mathematical formula for jolting technologies is superexponential. On a logarithmic chart, a superexponential shows up as an exponential curve, just as an exponential shows up as a line, and a linear function as a logarithm. For every given unit of time, the jolting value will increase by an increasing amount.
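A minimal sketch of how these three families of curves land on a logarithmic chart, using 2^(2^x) as a stand-in superexponential. The choice of that particular function is my assumption; any superexponential behaves qualitatively the same way:

```python
import math

# A logarithmic y-axis draws y = f(x) at height log2(f(x)).
def on_log_chart(f, xs):
    return [math.log2(f(x)) for x in xs]

xs = [1, 2, 3, 4]
linear = on_log_chart(lambda x: x, xs)             # becomes a logarithm
exponential = on_log_chart(lambda x: 2 ** x, xs)   # becomes a straight line
jolting = on_log_chart(lambda x: 2 ** 2 ** x, xs)  # still exponential: 2^x
```

Even after the logarithmic compression that tames an ordinary exponential into a line, the jolting curve keeps bending upward.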
We are barely starting to be prepared to cope with exponential technologies. We are completely unprepared to face jolting technologies.
The statistical interpolation of data points hides local variations in every model, including models of jolting technologies. The deviations from the model in jolting technologies can be superexponential as well. If you hated the stock market flash crash, wait until the world is dominated by jolting technologies.
We must rapidly develop a series of methodologies to cope with the consequences of jolting technologies changing our society and redefining our world. Our image of the world can rapidly fall out of sync with reality. Are you sure you know what life in the thriving cities of Nigeria, China, or India is going to be like in ten years, if you live in Europe or North America and don't travel and don't read?
There are many examples of this in computation, communication, cognition, transportation, biology, and elsewhere.
Example 1: The simplest example is the increasing acceleration of a rocket with the engines at full power and diminishing mass as it consumes its fuel during ascent. F = ma: at a constant force, with a diminishing mass, the acceleration will increase.
Example 2: Another simple example is when you realize you need to brake harder to stop in time before you hit the car in front of you, and your deceleration increases.
Example 3: The cost of DNA sequencing is decreasing at a superexponential rate. (Except for recent price gouging close to patent expiration.)
Example 4: Quantum computers are improving at a jolting rate: their chips follow a variant of Moore's Law, and their capability is an exponential function of the increase of their components, the qubits.
Example 5: AIs designing AIs, neural networks improving neural networks are a jolting technology that is going to disrupt how we think about cognitive tasks. See Generative Adversarial Networks.
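Example 1 above can be put into numbers. This sketch assumes a rocket with constant thrust and a constant propellant burn rate; the figures are illustrative, not data for any real vehicle:

```python
THRUST = 1_000_000.0  # newtons, held constant during the burn (assumed)
M0 = 100_000.0        # initial mass in kg (assumed)
BURN_RATE = 400.0     # propellant consumed, kg per second (assumed)

def acceleration(t: float) -> float:
    """a(t) = F / m(t): constant force applied to a shrinking mass."""
    return THRUST / (M0 - BURN_RATE * t)

# At t=0 the rocket accelerates at 10 m/s^2; at t=100 s, with 40 tonnes
# of propellant gone, the same thrust yields about 16.7 m/s^2.
```

The force never changes, yet the acceleration keeps growing: that growth of acceleration is the jolt.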
How will jolting technologies impact established industries? They will be more surprising, more unbelievable, more disruptive, harder to prepare for and to cope with than the changes we've seen with exponential technologies. Here are a few forecasts of my own. Let's look back at them in a few years.
Forecast 1: 5G mobile networks together with LEO satellite swarms will have a jolting effect on communications availability on a worldwide basis. They will drive applications such as artificial reality that have been incubating for decades, to the forefront of market adoption.
Forecast 2: Self-driving electric cars will have a jolting effect on the transportation industry. Most car companies of today will not survive, with the exception of Tesla and one or two others. At the same time, self-driving cars will create entirely new industries that were unimaginable while we had to have a human included in the equation all the time.
An increasing rate of jolt is the jounce. I fully expect jouncing technologies in a postsingularitarian world.
Additional examples will illustrate how exponential change disrupts the industries where it starts. I want to highlight that, at the beginning, people who overlook the trends are not stupid. It is very easy to assume that things are going to continue as they always did, and, in the noise of competing ideas, to fail to identify the one that is going to win.
People also look at what we are saying and tell us that we are naive: in a finite world, every exponential is going to hit against the limits of the physical availability of space, energy, and resources, and it will peter out, so what are we talking about anyway? They don't realize that what we are talking about is not a single curve, but the seamless continuation of a series of technologies that together trace out the exponential that we are identifying and talking about.
That is, for example, what happened with computers. The electronic brains of the end of the 40s, the 50s, and the 60s worked with mechanical relays, electric relays, and vacuum tubes, with limitations that appear incredible today. Their parts were very unreliable, and each morning, when they were turned on, a certain number of hours needed to be dedicated to switching out parts that were not working anymore before they could be used. There would have been a natural upper limit to their size, when more than 24 hours would be needed to even get them to start working!
These components were then superseded by much more reliable, compact, fast, and affordable transistors and integrated circuits. Today you read articles in the popular press saying: "Oh, it's over! Moore's Law, as it is called, cannot continue. Computers are not going to be more powerful. It's the end, sorry". But the journalists writing those articles concentrate on a single generation of the technology. Because yes, we are miniaturizing the circuits so much that we are literally hitting against atomic limits; they cannot get smaller. We are starting to try to compute with single electrons rather than a flow of electricity, and the very basics of our circuits and computers behave completely differently at those scales.
But quantum computers are around the corner: machines that, rather than pretending that those phenomena don't exist, take advantage of them. They are built in order to use the very quantum phenomena that disrupt the other types of calculation.
Just as with hardware, similar things have been happening in the world of software. Punch cards were so difficult to use that only a priesthood of white-coated specialists could touch them and feed them to the computers. A little later, computers started to become interactive, but they were still very arcane. Only those who were very passionate about understanding how they worked would learn and apply themselves to it. But after a while, computers started to accommodate how we organize our knowledge and how we work: that there are documents and folders, that we actually like color, that we have an aesthetic sense, and that we work better if computers show us things that are beautiful.
Just recently, they started to become independent from the physical limitations of the dimensions of our hands. No more keyboards, and we can have screens that are very small or huge. Computers are starting to understand our motions, our gestures, our emotions. We are getting accustomed to talking to computers, and to computers talking back, the basis of the smart speakers that are in many homes.
Who doesn't hate it when you receive a three-minute voice message on WhatsApp? Tell the people sending you those recordings that rather than using the microphone in the WhatsApp message field, they can use the microphone on the side of the keyboard and then start talking. The computer will write what they say, and for them nothing will change, but the recipient will take 10 seconds to go through everything that they believe is so important, picking out the three words or the last two sentences that are enough. So it's a win-win. That is, for example, what universal speech recognition makes possible.
Technology is going ahead, and people are already working on advanced human-computer interfaces, brain-computer interfaces, to understand our thoughts, to read our minds, but also to write to our minds. It is not going to take 100 years, it is not going to take 50 years, it is not going to take 30 years. Maybe it will be a little more than 10, but probably less than 20. Telepathy is going to be the next WhatsApp! So get ready to debate with your partner what it means when you don't want to share your thoughts with him or her. Are they ready to reciprocate? And for many of us it is going to be permanently weird. Today we have kids who have no problem at two or three years of age playing with an iPad; for them, digital natives, this world is very natural. Those who are born into a world of universal telepathy will also take it naturally as they grow up with it.
We believe that we are superior to the Romans, who were so ignorant and barbaric, and to the obscurantist Middle Ages, which understood nothing and burned witches. In terms of what we are and who we are, we are very likely the same people. The difference is the technology that we have available. It is technology that makes societies different from those of the past possible.
If you asked a Roman slave building the Coliseum if his life was just, he would confirm that it wasn't. However, the follow-on question, whether he could imagine a world where slavery was abolished, would astonish and maybe even enrage him, as even the slaves believed their condition to be a necessary part of society. Wouldn't a victorious Spartacus, sitting around the fire in a liberated enclave, start looking to pick who would be the slaves the next day?
We outlawed slavery. Legal ownership of people was abolished in all countries over the course of the last two centuries. We as a society, universally, everywhere in the world, understood that it was important to decide that people were not objects that we could buy and sell. Complying with that, as with any other law, is a different question: in half of the world's countries, enslaving another human being cannot even be prosecuted, as there is no criminal law against it on the books.
What made this possible is the mechanization of human and animal muscle: the introduction of tractors and combines in the fields, and of similar machines in many other industries. These were not designed, produced, sold, and used on moral grounds. Their owners were not necessarily abolitionists. But the productivity that they allowed so far outpaced that of farms based on slaves that these just could not compete. The abolition of slavery, on top of being the right choice, also became the convenient choice.
What is going to happen in the world, and it is already happening today, when the very foundation of 20th-century civilization, the hydrocarbon industry, oil and coal, is being driven out by technological change? Because any energy generation facility that is based on natural gas, coal, or oil is uneconomical. There is nothing cheaper today than solar energy generation. Together with wind and hydroelectric generation, as well as large battery installations to smooth out their ebbs and flows, renewable energy is taking over. We are going to become a sustainable civilization, not because it is the right thing for the planet, but because it is an economically superior choice.
How are we going to adapt? How is civilization going to change? How is the geopolitical balance that has held for the past 70 years going to change?
Energy; digital manufacturing and 3D printing; hydroponics and precision fermentation in food production; personalized health; peer-to-peer learning; the world of decentralized finance; new security models, with the ability to incentivize compliance; policymaking, the very technologies of regulation and the creation of consensus.
Each of these is a multi-trillion-dollar industry that is changing radically as we speak, and they are going to change more in the next 10 years than they changed over the course of the 20th century.
How does the structure of our democracy adapt to this change? When Churchill said "Democracy is the worst form of government, except for all the others", we laughed: "Oh yeah, the old chap is joking, haha". Actually, it was a challenge. And for the past 90 years we were not confident enough, we didn't have the leadership skills, to take on that challenge and improve the way our society works, in order to build a civilization worthy of the opportunities and ambitions of the 21st century.
It is commonly assumed that the pace of technological change is fragile, and that conflicts and wars often slow it down. Actually, the data shows that the exponential pace of improvement of technology has been going on in an uninterrupted manner for a very long time. Even in the largest conflicts of the 20th century, World War I and World War II, we find support for this, with the war effort even leading to inventions like radar or jet engines that we use universally today.
The unstoppability of these changes, and especially their accelerating and jolting pace, requires a new set of tools, a new kind of thinking. We are going to analyze these tools and these new kinds of thinking. It is my objective for you to acquire them, and then to exercise them, so that you are confident as you speak in your community: "Watch out, because the mechanisms that you are facing have an exponential and jolting nature." You won't have all the definitive answers and all the details of what needs to be done. It doesn't matter whether we are in Milan, in New York, or in Hong Kong. It doesn't matter if the organizations impacted by the changes are small or huge.
Because the CEO of Google, or of any other large corporation, is in the same situation of uncertainty that we feel today. You can be at the pinnacle of your industry for literally 100 years and still miss the innovation that could pave the way for another 100 years of success. Kodak invented the digital camera without believing in it, and it went bankrupt pretending that the digital camera could never perform. The year it fired the last 10,000 people and declared bankruptcy was the year that Instagram, and the 12 people who created it, was acquired by Facebook for a billion dollars. Nokia created hundreds of different mobile phone models for every possible niche, all with different kinds of features. When a company that had never played in the telecommunications field announced a phone that didn't even have a keyboard, Nokia was laughing. And then it died. It died laughing; actually, it died crying. Because the last CEO was literally crying at their shareholders meeting, when he said:
"We did nothing wrong, and we are still in this situation. We did nothing wrong”.
But that's kind of the point. Since there are no given answers, part of the recipe is to make things that don't work, to experiment. That is the reason we need all the brains, creativity, and human talent. Maybe we don't even have enough people; maybe we need 20 billion people to make all the mistakes that are needed in order to generate the answer that works for a given challenge, whether it is the asteroid on a trajectory to hit Earth, climate change, or the next pandemic.
And then, of course, the bureaucracies, the most risk-averse organizations, resist with all kinds of excuses.
People are going to get hurt. "Oh, the poor, and I'm looking at you: the poor are stupid. We can't allow them to invest; only the rich can invest, because the rich people are smart!" This is nothing but a state of panic. Just like your immune system if you are allergic to nuts, which says: "You want to eat the nut? I'd rather kill you than allow you to have a nut". It's an overreaction.
That is what happened in the state of Hawaii. Solar panels became so popular that, under the legislation that required connecting them to the grid, the utility of the island said: "I cannot take more energy!" Solution? No more solar installations, no more solar power in Hawaii, for an entire year or two, until they were ridiculed so much from all over the world that they changed the legislation.
An example in biology and health: 23andMe was prohibited by the FDA from selling its gene sequencing kit to consumers because, just like with the Latin Bible of the Middle Ages, none of you should be allowed to access the sacred text of your DNA without the intermediation of the priesthood of professional physicians. Actually, the kind of knowledge that could be built on your DNA ten years ago was inferior to what you can get today.
In the field of finance, there is what happened in the state of New York, which after three years created a framework legislation for blockchain startups, called the BitLicense, that was so expensive that it was cheaper to start a traditional bank than a crypto company. You could be sitting with your laptop in New York, connecting to a service with your browser, and the geo-targeting of your location would tell them that you were a resident of New York, and the service would say: "Sorry, we don't want to touch you. We realize that New York used to be the center of financial innovation in the 20th century, but your legislators decided that it was more important to protect the incumbent industries, killing innovation, than to embrace change".
Since technological change is unstoppable, panic reactions are totally useless. What we need is a better understanding of the dynamics of risk, which is what we have been doing, what we have been evolving, for tens of thousands of years.
If I left the cave and went left instead of right 10,000 years ago, by night my family would know that I had been eaten by the saber-toothed tiger because I took the wrong turn, and that was it. If I told my wife in the Middle Ages that I had the entrepreneurial spirit and wanted to open a tavern, and she said "Yeah, let's go for it", and it didn't work out, failing wasn't something people could be proud of back then. Failing meant debtor's prison, and you would die in debtor's prison after a few months, and your family would be reduced to complete poverty.
Now, the trust that we gain by measuring risk came at the cost of centralized authorities that impose trust by force. So, for example, when I hand you a 10-euro banknote, you take it not because you trust me: you trust the European Central Bank, and the fact that the police would have gotten to me if the banknote were counterfeit. But the kind of trust that we need to seek is much more resilient, and that is the kind of new organization that can give us superior results, just like understanding and organizing how risk has to be measured. We are getting better and better at learning how to learn. Our schools have improved; we understood, for example, just a little more than a hundred years ago, that universal education was important enough to be mandatory. If you don't send your children to school, the police will come and knock at your door and potentially arrest you, in many countries.
We still have to improve, though, as the return on investment of college education in the United States is negative, and has been negative for decades, statistically speaking, over the totality of the population. I am sure that all of you improved your lifelong earnings more than the interest payments on any debt that you incurred for gaining the certificate that enables you to be employed. But that is not the case for the students in the US, who accumulated 2 trillion dollars of debt that they are never going to be able to pay back. It's a new kind of indentured servitude, because not even personal bankruptcy can eliminate the debt. Actually, if a young person dies, the parents inherit the debt to be paid off.
The investment that society allows, or even imposes, in the improvement of the skills of individuals is so that we can apply those skills not only to the execution of currently clear and well-defined tasks. Society encourages, to an increasing degree, the execution of a diverse and parallel set of experiments, what we call entrepreneurship, so that the inventions of basic science can be brought to fruition as applications in different industries.
So how can we learn to learn better? That is the most important role of artificial intelligence, which is going to be the theme of one of our next units. Better hardware, more data, and better algorithms are giving us new tools that we can use to make every possible process better. And it is going to be a simple recipe: whether for a law firm, a manufacturing firm, or a service business, everything with AI is going to be better. It's something like what happened with electricity a hundred years ago. You would be able to look around and say: "Wow, I need light! Let's use electricity! Wow, I have machinery to move, let's use electricity!" With cars it took us a little longer, but finally we are electrifying our transportation as well.
With jolting technologies, applications that would have appeared almost magical a few decades ago, and certainly magical a few hundred years ago, are now happening all around us. Many of you have seen the videos of Boston Dynamics on YouTube. A few years ago DARPA, the Defense Advanced Research Projects Agency of the United States, set up a contest for robots that needed to open doors and step on wobbly terrain, and they performed horribly. So badly that you can find compilations on YouTube to laugh about these robots. And we do laugh; they are funny. But the latest one, at least as far as I am concerned, performs somersaults and acrobatics to a superhuman degree. It is doing things that I would never be able to do. And we can be sure: it is never going to get worse, it is only going to get better.
For AI applications, the performance of systems, rather than doubling every two years, started in 2012 to double every 3 to 4 months. As a consequence, since 2012, rather than a roughly 30-fold improvement, we have had a 300,000-fold improvement. AI systems became 300,000 times better.
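As a sanity check on these figures, converting a doubling time into a total growth factor is one line of arithmetic. The 3.4-month doubling time is the figure from OpenAI's "AI and Compute" analysis; the roughly 62-month window below is my assumption for when the commonly cited ~300,000x applies, and the exact dates may differ from those in the talk:

```python
def growth_factor(months: float, doubling_time_months: float) -> float:
    """Total multiplicative growth after `months` at a fixed doubling time."""
    return 2 ** (months / doubling_time_months)

moore_pace = growth_factor(62, 24)   # ~6x at a two-year doubling pace
ai_compute = growth_factor(62, 3.4)  # ~300,000x at a 3.4-month pace
```

The same elapsed time, with a shorter doubling period, separates an unremarkable single-digit gain from a six-orders-of-magnitude leap.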
The traditional definition of the technological singularity is a moment in the future when self-modifying artificial intelligence is going to be able to apply itself to goals, uncoupled from human control and supervision. It was originally formulated by Vernor Vinge in 1993 and popularized by Ray Kurzweil, who symbolically put it at 2045. Ray revised his figures recently, moving the date forward to 2038. I set myself a calendar reminder that I must remember to update as well.
So, what is happening is complex, exciting, frightening, dangerous, but still we have to understand it. It is our job, whether we are called Kodak, whether we are called Nokia, or whether we are called the next company that is not going to be alert enough to try to understand what is going on. And each of us can do various things. I, for example, have a chip implanted in my hand that freaks a lot of people out. From a technological point of view it's cool: it's a little glass vial that contains a miniature computer that can communicate, calculate, and store, and it is sitting here in my hand; you can actually feel it under the skin. When I am physically together with people, I tell them that they can touch my hand if they want. Some people want to do that; others never, saying: "I don't even want to think about it, it's just horrifying". And that is the point of this experiment, because you are literally touching the limits of your adaptability. I'm not saying, as you will read on whatever Internet conspiracy theory pages, that at any time any government is going to impose that you must have a chip like our dogs do.
But what if, in 10 years, if you don't have a cognitive coprocessor, your competitors in another firm who have a cognitive coprocessor implanted in their brains beat you because they are a thousand or a million times faster in collecting and correlating data and making sound decisions? Whatever you do, those who adopt this kind of thing will beat you, just like Instagram killed Kodak. Since you deserve the choice to be freely made, the question that society has to ask is: what is going to be our attitude towards those people who want to opt out, who don't want to adopt these technologies, who refuse a brain implant? What are we going to do, especially if the percentage of those people grows to be 20%, 50%, or 80% of the population?
Those of us who want to innovate and invent will be able to adapt, because we will adopt these solutions and these experiments. And many of us are excited enough to want to spread this excitement, to spread the knowledge, to share what we learn, because we believe that it gives increasing degrees of freedom and improving choices in a world where each of us, any of us, can go from idea to action very, very rapidly. The barriers to entry are only psychological. The opportunities to participate are unbounded, the opportunities to create value, to try what works. It really just depends on whether you want to be part of it.
Welcome to the new season of The Context. In this season, we will talk about a lot of different topics around technology and how it impacts society, which is what we have been doing in the previous seasons as well. But we will introduce new elements. A lot of you reached out to me and wanted to provide advice on how to make The Context even more compelling, and we have taken that advice and incorporated it into this new season. So I welcome you, and let's get into the topic of this week, which is the future of work. A lot of us are concerned as we hear about artificial intelligence, robotics, and automation. We believe that these developments are going to cause a disruption in society that we are not going to be able to cope with, and that the accumulating tension is going to cause disorders, social upheaval, maybe even violence or wars. And, of course, I don't know if it is going to be possible to prevent any of these from happening, or all of them from happening. But I do believe that the perceived dangers of these transformations are superior to what reality is going to bring. Yes, these technologies are very powerful. However, human talent, human creativity, human ingenuity, human passion, human ambition are going to be a part of the future, regardless of how powerful the technologies that we surround ourselves with show themselves to be. The very simple reason is that we will take advantage of the tools and expand the dreams that we can realize. We will be able to be more ambitious using those technologies, rather than being limited in what we can do by inferior technologies. And as our ambitions and opportunities expand, that will represent the new kind of work, the new jobs, and the new ways of both making a living and building a dignified life, which will support more and more people. We don't have to fear AI, robots, and automation; we have to embrace them. Because the lives of too many people are limited by the
insufficient technologies that are at their disposal. Think of the life of a farmer in the Middle Ages: his choices were extremely limited. How limited these choices were is illustrated perfectly by the fairy tales that we inherited from those ages, which talk about princesses and dragons, and the seventh son killing the dragon, marrying the princess, inheriting the kingdom. Those fairy tales embody the sheer impossibility of all of those dreams. And today we can be proud of the fact that so many things that would appear to someone from the Middle Ages as part of a fairy tale are indeed part of our daily lives, as well as the lives of an increasing number of people all around the world. Think about it: the possibility of studying, the possibility of receiving nourishing food and clean water, the possibility of planning ahead and knowing that if you invest in your future, you will reap the benefit of that investment.
Today, we have the ability to communicate with practically anyone all over the planet and we can undo Stand each other better thanks to platforms and tools of communication and coordination, we can form groups. And these groups can achieve their goals, projects that are ambitious, concrete, and generate value. These kinds of remote collaboration wasn't possible until 10 or 20 years ago, and today is very effective. And this remote collaboration started may be in very specific types of jobs like that of a developer who could just code away on his keyboard and looking at his monitor, and then deliver the results the code periodically every day or every week, to the team that he's part of, or in a in a independent fashion as a freelancer. to the client. Today, we understand that sales and marketing project management, but also design and many other tasks are digital and can be delivered not only within the environment of an office, but anywhere in the world and our understanding of how
the teams can be coordinated optimally, how the rhythms, the emotions, the passion of a team can be channeled towards high productivity, is also being perfected. We have tools in order to chat in real time, to move away from the more cumbersome medium of email to these types of modern communications, tools that still accumulate knowledge within the group, knowledge that is not lost but is searchable, well indexed, threaded in conversations that are divided into various groups or channels, and so on. So, as the tools evolve, and as our understanding of the tasks evolves, it is evident that open opportunities are available to everyone. We just have to grab the opportunities and make sure that we leverage them. That is why I am so excited to be involved in one of the most advanced projects that live at the convergence of these various trends: remote work, flexible work, online collaboration, digital jobs. And it is Torre, T-O-double-R-E; torre in Spanish means tower. It was founded by Alexander Torrenegra, whom I've known for 10 years, and who is a successful entrepreneur originally from Colombia, who founded two companies, each in a specific niche of online work and online collaboration. He took his insight further, to a higher layer of abstraction, and realized Torre, which is a platform for your professional genome. It is a platform for finding remote, digital, online work, but also for finding teams, for finding talent, if you are on the other side: a talent seeker rather than a job seeker. And Torre uses artificial intelligence in order to analyze and match, at a high confidence, an individual to a job, but also individuals to teams, so that the skills that you want to develop are further developed, the skills that you are not interested in developing are not part of your job description, and certain characteristics of the individual are matched with the teams that they will find themselves working in, so that the satisfaction of both is maximized, churn is reduced, and effectiveness increases.
Torre as an organization is itself doing what it preaches. It uses Torre for finding talent, but it is also a remote, distributed team, with team members in over 20 countries and investors on four continents, in many countries of the world, and the company is expanding very rapidly. There are, as I speak today, over half a million members on the platform, growing exponentially, and opening your profile is very easy. It incorporates very lean and modern techniques in order to rapidly create the profile, but also to create connections, rather than the old-fashioned method of networking with the cumbersome confirmation links that you can find on platforms like LinkedIn. For example, Torre uses what it calls signals. The signals indicate that you are looking forward to working with someone in the future, maybe with their organization. And together with hundreds of other data points, Torre enables you to create a profile that is rich, lean, effective. The platform, of course,
is just at the beginning; it will enrich itself with many opportunities for those of us who are looking to enrich our lives and make work fulfilling in the future. Think about it. Is it acceptable that, if you do a survey, 80% of the people will respond that they find their work boring, not engaging, unfulfilling? In the future, not too far but rather close, we will be able to turn this completely around: all of us will find jobs that will give dignity to the individual and to the communities that we live in, and that will enable us to thrive, to acquire new skills, to apply our curiosity, to keep learning in order to provide value to the society that we are part of. And this is already available to everyone today. There are 5 billion people of working age on the planet, and 4 billion of them are still without a professional profile. So there is a lot to do. And together, we can go ahead on this path where artificial intelligence, robots, and automation are
not enemies, but allies in order to make work fulfilling for everyone.
The field of Artificial Intelligence is very important and the hype in the media around it goes exactly in proportion with its importance. That is why understanding its implications matters, in order to be able to make the right kind of decisions on how to apply it and when to apply it to our own business.
Agritech, the application of technology to agriculture, is 10,000 years old, dating back to when we invented agriculture; but the application of Artificial Intelligence to agriculture is of course much more recent. Having started in universities, or with basic R&D funding, it is only recently that Artificial Intelligence has become approachable, applicable, and practically useful, with measurable positive effects, in a widespread manner.
But that also means that applying it to agriculture and farming, and the design of new crops, and the analysis of what their development and deployment means, is now something that can represent an important competitive advantage on the side of those organizations that are able to take advantage of it. BIOX is among these organizations; enlightened by its own desire to understand the implications of leading-edge technologies, it is going to be able to discern what is hype and not to be uselessly attracted by it, in the pursuit of unachievable results as of today.
And on the other hand, to identify what the tools are, and how to apply those tools usefully in its processes right now. So: the course on Artificial Intelligence for BIOX is important and actionable, and I welcome you to enjoy it, as well as to ask questions and vote on questions by other participants, so that in the interactive session we will have fun, and we will learn even more, after you have watched the videos of the course, which give you a solid basis to start.
Has the time come for universal basic income? From being a niche initiative or thought process that only a few people had understood and embraced, basic income initiatives have enjoyed experimentation all over the world over the course of the past 10 years. And with the pandemic, they have seen a decisive acceleration and a rollout in many different nations. If you think about it, basic income is part of a long wave of civil rights and human rights that are progressively recognized, and then become an ingrained and accepted part of what it means to build a humane society.
Steven, good afternoon. Hi, Steven. This is David Orban from Torre. How are you today? I'm good. I'm sorry, this is who? David Orban from Torre. Okay. Right. I sent you an email saying that I would be calling, and of course you didn't confirm, so apologies if I did that regardless. We spoke about the investment opportunity in our HR tech startup, quite extensively, about, you know, 10 days ago or so. And I wanted to follow up, because you asked a lot of very good questions, and I sent you access to our data room in order for you to be able to see, you know, a lot of details around it. And I wanted to make sure that I would be able to answer all your remaining questions, and then receive from you confirmation of your investment interest, and ask you how much you would like me to reserve in the current round, if that makes sense for you. Gotcha. So, David, as I told you last time, and I'll tell you one more time, email is the worst way to get in touch with me. You can continue to do that
if you'd like to, as well. But, like, it's over 500 emails a day, and a lot of them just get scraped away. So if you ever need me for something that's important, the best thing to do is just to push your auto-dial button and leave me a voicemail message. I get and answer all of my voicemail messages every day. So just say it. Okay, second thing: the information that you sent me outside of the data room (I have not been in it to get more) is not enough information for me to make any kind of decision regarding that, that's for sure. The only thing I can tell you is, I'd like to do more; I'd like to move forward with gathering information. I'm not wasting your time, and I'm not wasting my time, I don't think, but I'm going to have to meet with your founder. I won't be able to invest before I meet with him. So if he's not meeting with anybody at all, then I'm out as well. It's very early for us. This is a very early investment. I don't
even know whether I can describe exactly what it is that you guys are doing yet. I don't know that I know enough about it. Of course. And, yes sir, Alexander Torrenegra is available to meet you. We can set it up, and I'm happy to indicate to him that you are seriously interested. And did I catch it correctly that you were, or were not, able to get into the data room? I'm not sure I understood. I haven't tried to get into the data room yet, because I don't know enough about what I'm looking for yet. So the information that you sent me, the attachments or the links that you sent me, as I go through them, I don't know that I know a lot more about what it is that you all are doing than I knew the first time I talked to you, or the only time that I talked to you. It's just, it's very early in your lifetime. Yeah, yes. So the investment is a seed-stage investment. We don't have an EBITDA; we haven't even reached what is called the
product-market fit; we are evolving towards it. But 500,000 people on the platform are finding value, as are the people hiring talent on the platform already, as well. And Alexander Torrenegra built two other companies, with hundreds of millions of dollars of business generated, so he knows what he's doing, right? We already have the $5 million in our pre-seed and $5 million in the seed. We then expanded it by $2 million, which is the reason I'm talking to you, because that is where there is space for you to receive an allocation under the SAFE-plus-warrant vehicle that we configured. And all of this is in the data room. So, what is the best way? Can I send you a text message with the link to the data room? Would that work? Did you send me a link to the data room in your original email? What are the two other companies, the bigger ones that he created? Yeah, Voice123, which is a vertical marketplace for booking voice talent for jingles, voiceovers, and
audiobooks, and all these kinds of projects. And the second one is a more general, but still vertical, marketplace for creative projects that involve entire teams. So Disney would be a client, and indeed is a client, and they book, you know, everyone from the scriptwriter to the editor to the videographer, etc., very rapidly and very effectively. And what is the name of that company? That is called Bunny Studio. Bunny? Yeah, B-U-N-N-Y. Correct: Studio. And Voice123's revenues were recently at what level, roughly? I don't know the details and the history. The companies generated hundreds of millions of dollars of projects; exactly how it is booked, whether you book the commission on those projects and account for that, etc., I don't know. I'm not involved with those companies; Alex may be able to answer those questions in detail. They generate, you know, they are very profitable, and both of them are bootstrapped. Right. So Alex is an angel investor; he's a shark in the
Latin American edition of Shark Tank. He already had unicorn-level exits. And he put $5 million of his own money into Torre, because he believes it's important enough for him to run it himself, rather than give it to someone else, like some other projects that the venture studio he created spun out. So Voice123 wasn't sold? No, no, no, they are still up and running; they are still throwing off millions of dollars per year. But Torre is going to require several rounds of funding, given its mission of bringing literally billions of people onto the platform. And so Alex is already in conversations with first-tier VCs, from Sequoia to Andreessen Horowitz to Founders Fund, etc. One of them is going to lead the A round of funding, Q2, let's say, next year. And of course the details are unknown; they will need to be negotiated, and anything could happen. But taking into account the, let's say, $12 million between pre-seed and seed of Torre, the A round is going
to be proportionally larger, let's say between $30 and $40 million. And it is at that point that the seed investors will convert through their SAFEs, receiving equity, and they will have the ability to exercise their warrants. So the warrants are giving you the right to buy 16 shares for every hundred dollars invested. So, let's say, half a million dollars gives you the right to buy 80,000 shares, for 20 cents each, so $16,000. And if you run the scenario, presuming we sell 20% of the company, receiving $30 million at the A round, then the seed investors' on-paper gain is a 3x between SAFE and warrants. Of course, no liquidity at all, but that is what the conversion is going to look like. David, you're way ahead of where I am right now. Again, I just want to make sure that I'm always communicating with you effectively, because I'm not worried about my gain; I'm worried about trying to understand whether... excuse me, I'm trying to worry about whether I can even
understand what it is that we're doing here. What the goal of this company is, I still don't understand its aim. Sure. So the goal of the company is to achieve market dominance in a very fragmented and still underdeveloped market, providing billions of people with a professional profile through which they can manage the evolving trajectory of their professional lives. Already today, everyone changes work at least three or four times in their professional lives, if not more, and there are still 4 billion people who are of working age and have no digital tools for assisting this process. You know, they use WhatsApp to chat, they use Facebook to social network, they use whatever other tools for everything they do, regardless of where they are; it doesn't matter whether they are based in the US or in Pakistan. Okay, as of today, even taking into account that LinkedIn has been very successful, and has been acquired by Microsoft for $26 billion, its monthly
unique users are just 200 million people. So it is addressing 5% of this blue-ocean opportunity. And this is one side of the market. The other side of the market, of course, are the corporations that are desperately looking for talent. They spend a lot of money in recruiting, they waste a lot of time in selecting and hiring, and the process is inefficient. So Torre uses its own approaches, AI-based algorithms, to match candidates to jobs and teams, improve the success rate of hiring, and ease the work of recruiters, with both sides being able to use the platform for free. And the monetization is going to come further down the line, with several options for, you know, generating revenue from the platform. Okay, so right now the strategy is to grow the platform and think about monetization later. Yeah. There are competing services like that now, so I need to understand a little bit more about that. Okay. Okay, David. So, again, I
always pledged to you, at the beginning, that I would be transparent with you. It's very early for me. I didn't work on it as much as I thought I would last week; I thought I would work a lot more on it. So I guess I need to get my questions together and call you back, and see if I can get the answers from you so I can move forward, or just cancel out and drop it, because it's very, very early for us, as you can tell. That's totally fair; I completely understand, and I'm looking forward to your questions, or to the confirmation that you are unable to proceed. Yes, sir. All right. So I'll try to set aside some time tomorrow morning to do this, then, very early morning, and hopefully... well, I've got some rebalancing to do, so no commitment from me for this week, but early next week? Yes. Wonderful. Looking forward. Thank you very much. And as soon as you confirm, you know, at whatever degree, your interest or your comfort, I of course am happy and able to set up a meeting with Alex as
well. You got it. Thank you, sir. Thank you. Bye bye. Yes, sir.
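The SAFE-plus-warrant arithmetic described in the call above can be sketched in a few lines. This is a hedged illustration that simply uses the figures as stated in the conversation (16 warrant shares per $100 invested, a 20-cent exercise price), not official deal terms, and the function names are mine:

```python
# Illustrative sketch of the warrant arithmetic mentioned in the call.
# Assumptions (from the conversation, not a term sheet): 16 warrant shares
# per $100 invested, exercisable at $0.20 per share.

def warrant_shares(investment_usd: float, shares_per_100_usd: int = 16) -> int:
    """Warrant shares earned for a given investment."""
    return int(investment_usd / 100 * shares_per_100_usd)

def exercise_cost(shares: int, strike_usd: float = 0.20) -> float:
    """Cash needed to exercise all the warrant shares at the strike price."""
    return shares * strike_usd

shares = warrant_shares(500_000)   # half a million dollars invested
cost = exercise_cost(shares)
print(shares, cost)  # 80000 16000.0
```

Run with the half-million-dollar example from the call, this reproduces the 80,000 shares and the $16,000 exercise cost mentioned; the 3x on-paper gain scenario depends on the A-round terms, which, as noted in the call, are still unknown.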
When new technologies emerge, there is always an important question: how to allow the market that they enable to express itself? And whether regulators need to intervene in order to make sure that the market is healthy, with a plurality of players competing in order to provide the best possible service to those that use the products in this new market. Is it possible to see and find examples where the regulators provided the right kind of incentives? Is it possible to find other examples, where the regulators apparently failed, and the market finds itself in a situation that is certainly less than optimal? Yes, examples are plentiful. And sometimes, as it happens, these examples clarify and maybe provide some guidance for the future. But probably they are not a perfect recipe for understanding what should be done. So what is the role of the new technologies? They certainly change the rules of the game; they make abundant something that was previously scarce, whether we are
talking about physical resources or access to some kind of service. Also, the new technologies make business models possible where the relationship between suppliers, intermediaries, producers, and end consumers gets completely restructured. And finally, and most importantly, new technologies make novel products and services possible: things that would literally look magical if we were able to see them with the eyes of the past. So, on to some examples. The European telecommunications market used to be extremely fragmented, and extremely expensive. And even though mobile phones took root in Europe, and got adopted faster in the 90s than, for example, in the United States, with both handsets and services, this innovation later slowed down. I remember that I would travel to the US and look at the local choices available, both in terms of the contracts that carriers would provide and in terms of the handsets available, and in the 90s they would be tremendously inferior to
what was available in Europe. Not only the variety of handsets provided, for example, by the likes of Nokia, but also the novel business innovation originated by Omnitel in Italy of the prepaid scratch-card-based contract. Well, actually, no contract: just your ability to pay 20, 40, whatever amount of, well, back then it wasn't even euros, right? The local currency in the various countries. And make calls for the minutes that corresponded to that amount. This was able to bring in a new generation of mobile phone users that wouldn't otherwise underwrite a monthly payment with a contractual obligation of one or two years. Then the announcement of the iPhone completely upended everything, and in terms of handsets Europe fell behind, and no one could compete with the iPhone until Android came along with a wide variety of hardware producers. But what happened in Europe is interesting because
as the European Union tried to integrate its market ever more tightly, the opportunity was there for the regulator to step in and make sure that communications between the member states would not be hampered by antiquated understandings of how interconnection fees should be charged among players, and then typically passed on to consumers. So basically the regulator said to the carriers: listen, guys, I give you a given number of years, after which you cannot have consumers pay roaming charges if they are calling from one country in the EU to another country, or vice versa; and when they are traveling, they have to get the same service and the same charges regardless of where they are within the EU. This is very reasonable and very advantageous, both to consumers and to businesses, who are not exposed to the complication of asking themselves: oh, I will have this extra cost if I am calling this person; or, I will be traveling, so I need to buy a local SIM card and swap it
out, and so on and so forth. It is a fantastic step in the right direction for enhancing the collaboration and the business opportunities and tourism and everything else within the countries belonging to the European Union. Of course, in the United States it has been like this for a long time: the types of contracts that you sign with carriers do not include a long list of US states. To call Massachusetts it's this much, to call Texas it's this much; if you travel to California, you will spend this much in data and that much in mobile minutes, and so on. None of that: such crazy complication would not be understandable, and it would severely hamper the healthy development of the communications market. And I don't know about right now, but last time I was in India it was still the case that you couldn't get an Indian mobile phone number; you would only get a mobile phone number of the state where you were traveling. And this is a huge subcontinent, a huge country of over a billion
people. If you then flew from New Delhi to Mumbai, or you went to Goa or anywhere else, your phone number either would stop working, or you would end up paying roaming charges, or you would have to resort to buying a new SIM card and changing numbers. Well, except if you had, like I had, the Google Fi service, which with a single card was even then able to provide very, very low-cost communication, overcoming these local quirks. And you can imagine: if India were able to adopt the kind of regulation that would eliminate these roaming charges across all the states of the nation, it would be an extremely positive development for the country, both for consumers and for businesses, and in the end, of course, also for the carriers themselves, who would see the number of people who adopt third-, fourth-, and fifth-generation network services and handsets really blossom. Another example is pretty important to my friend Cory Doctorow and to millions of people who enjoy audiobooks.
Today, audiobooks and podcasts have seen a blossoming, because when people do their chores at home, or commute to work, or in many other situations, listening to them is enjoyable and convenient: you learn a lot, you follow a great story or a great storyteller. However, Amazon, with its Audible unit, dominates the audiobook category with a type of exclusive licensing that hampers competition, and in the process eliminates existing services, such as the ability for libraries to lend their collections to the patrons that visit their buildings, whether physically or digitally. And Cory launched his latest book, Attack Surface, on Kickstarter, specifically to break this stranglehold of Amazon, so that as many people as possible would pay for an audiobook that is not on Audible, in order to show his publisher that it is possible not to adopt the Audible contract of exclusive audiobook distribution, and that it is possible for his publisher to support authors
who want to make their books available in audiobook format everywhere.
I supported Cory's Kickstarter, and Cory was also a guest on Searching For The Question Live, where we spoke about this in depth. But the reason I mention this example is that the emerging market of audiobooks and podcasts may see the necessity of a regulator stepping in, because of the market's failure to stop the monopolistic practices enforced by these exclusive contracts that do not benefit the author, the publisher, or the public. For example, if I am blind, I can go to a public library and go home with an audiotape, and listen to the audiotape of who knows what author from the past. But if I go to the library and ask for the audiobook of a modern author, the library cannot lend it to me; I cannot listen to it. And my opportunity to participate in modern culture is severely restricted and hampered by this lack of competition, this lack of choice, due to the lack of intervention from the regulator that left the monopolistic practice to
entrench itself in the market. So these are reasons for the regulator to actually act, and to do it rapidly. Of course, the regulator is in a difficult situation, because new technologies are difficult to understand for the practitioners themselves, let alone for policymakers, whose job spans the board horizontally across all technologies, not just one. They cannot be specialists in each, so they rely on lobbyists telling them what the best practices are and what the outcomes are, and of course those lobbyists do not represent the public interest in general; they represent specialized interests. So, through these examples, I hope I illustrated that the choice is there of how and when to intervene, sometimes rapidly, but always making sure that any regulation that is implemented contains sunset clauses that force the policymaker to revisit it, measure the effect of the regulation, update it, and then implement it anew with an improved set of incentives and conditions, so that the market
can adopt the new technology and choices can flourish and multiply
and we get
to enjoy improved products and services in the future as well.
Welcome to the Jolting Technologies Seminar Series. This unit is a foundational unit that highlights the nature of technological change, and its impact on society, business, and the lives of individuals. I recommend watching the first few units in the seminar series in sequence, to acquire an understanding of both the approach as well as examples of technologies that will necessarily interact with others that are mentioned, through the examples given.
The unit is divided into various modules, with each module consisting of a short video dedicated to a specific topic.
There are also opportunities for deeper conversations around the topics covered: not only passively watching the videos, but interacting over the course of the next weeks and months. I'm also very easy to find online, and you're welcome to connect on any platform that you use; I will already be there. Sometimes I won't be able to accept the connection request, because I have potentially capped out the maximum number of connections, but we will be able to interact in any case.
The units talk about Artificial Intelligence, Quantum Computing, Blockchain and Cryptocurrencies, Internet Of Things, Digital Manufacturing, Renewable Energy, Electric Transportation, Genetics, DNA decoding, CRISPR, Space and Satellite Communications, Decentralization and innovative Business Models, and many others. A lot of very interesting topics, each of them with profound implications in terms of the challenges they represent, their adoption and implementation, the disruption in business models, and the regulatory and legal frameworks, as each is already understood or as it needs to be better analyzed and understood.
It is important to accept that most of these changes are unstoppable. As a consequence forward looking organizations and individuals like you, must be ready for those questions and challenges that necessarily arise from them. Too many are not, they don’t even know what questions they should be asking, let alone developing the ability to find and implement the answers. You are well positioned to adapt and to thrive, given your curiosity, passion and talent in understanding the future.
You have many tools at your disposal, and I encourage you to take advantage of them, as you explore more deeply the issues that we illustrate. You can watch the videos, read their transcriptions, ask questions during the interactive Q&A sessions, as well as vote on the questions asked by others. There are videos from guest speakers, recommended readings, collections of bookmarks, and articles, for each topic, as well as interactive charts of the concepts illustrated.
You also receive a free copy of my book "Something New", which goes deeper into many of the implications of the changes that we see around us.
We are going to start with what is probably the most important concept that you will need to familiarize yourself with and we will periodically revisit because it is so fundamental. It is the basis of practically everything that we are going to see. This is the nature of exponential change as opposed to linear change.
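To make the contrast concrete, here is a minimal sketch (with illustrative numbers of my own choosing, not data from the course) of linear versus exponential change, and of why an exponential looks like a straight line on a logarithmic chart:

```python
# A minimal sketch contrasting linear and exponential change.
# Illustrative numbers only: a linear series grows by a constant increment,
# an exponential series grows by a constant multiplier (doubling here).
import math

steps = range(8)
linear = [2 * n for n in steps]        # constant increments: 0, 2, 4, ...
exponential = [2 ** n for n in steps]  # constant doubling: 1, 2, 4, 8, ...

# On a logarithmic y-axis an exponential becomes a straight line:
# log2(2**n) == n, so the plotted values grow by a constant step.
log_values = [math.log2(v) for v in exponential]

print(linear)       # [0, 2, 4, 6, 8, 10, 12, 14]
print(exponential)  # [1, 2, 4, 8, 16, 32, 64, 128]
print(log_values)   # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

After just seven steps the exponential series has pulled an order of magnitude ahead of the linear one, which is exactly the effect the logarithmic chart is designed to keep readable.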
Welcome to the course on Quantum Technologies.
Quantum Technologies are a hundred years old, and you would be justified to think that they have nothing to do with your daily life, nothing to do with your work, and even less to do with Agriculture and Agritech. Well, you would be mistaken. Yes, it did take a long time before something as abstract about the nature of our reality as quantum mechanics was able to come through to us in specific applications, but now it is indeed here.
In the course, you will not only learn why Quantum Technologies are going to have huge disrupting effects on the way that we look at technology and its applications, but you will also learn about surprising examples that show you that Quantum is in your life. Every day you are using applications based on Quantum Technologies, and you wouldn't be able to do what you do without them. So welcome, and let's start exploring the exotic but exciting world of Quantum Technologies.
What does it mean for Elon Musk to start from first principles when he addresses a given technological challenge? Can we take inspiration from the approach and apply it to other problem sets in front of us? Tesla, last week, had a series of announcements, called Battery Day. These announcements were the result of many different threads coming together, culminating in the quite astonishing claim that within the next two or three years Tesla is going to be able to reduce the cost of batteries per kilowatt-hour by more than 50%, and increase the range of its electric cars, at the same time, by more than 50%. Also, that they are able to reduce the size of the factories building the batteries and the cars, while radically increasing the throughput, and the number of batteries, or the total electric storage of the batteries, per year, of these plants. What they were calling Gigafactories are now called Terafactories. Furthermore, once again applying first-principles thinking,
they are able to completely eliminate the use of cobalt in the batteries, and to improve the sourcing of lithium to the point where a 10,000-acre area in Nevada that they already secured for extracting lithium is going to be able to provide what is needed for the electrification of the transportation fleet of the entire United States. Another announcement they made is that over the course of the next three years they will develop and then make available a long-range electric car for $25,000, making further radical steps towards making electric cars universally accessible and able to replace and displace internal combustion engine cars. Now, there were so many enhancements, and so much innovation, that the market had a hard time absorbing it. After Battery Day, Tesla's stock actually declined 6%. Was the market expecting more? Evidently. Was this expectation reasonable? I don't think so. So let's look at some of the reasoning that went on behind the approach. Let's look at,
for example, the way that the batteries are designed and produced, without going into excessive technical detail. Battery technology is hundreds of years old, and it incorporates a lot of approaches that don't necessarily come from optimal solutions. They come from generations and generations of entrepreneurs and companies that inherit a given solution and make incremental changes to it until, within that particular solution set,
it is quite good, but never asking themselves, if they went back to the dra