General-purpose robots that may soon take on manual tasks performed by astronauts in space. Programs for self-driving cars that understand human behaviour. New drugs to fight cancer.
These are some of the novel ways in which ÎÚÑ»´«Ã½ companies are using machine learning and artificial intelligence – to the clear potential benefit of humanity.
But like nuclear fission, machine super-intelligence is a Promethean power with the potential to be corrupted, which is why there is now a sudden push to erect guardrails and develop ethical guidelines and regulations before AI either becomes autonomous or simply falls into the wrong hands.
Elon Musk and Yoshua Bengio, a Canadian pioneer in deep learning, are among the more than 27,000 people who have signed an open letter calling on all AI labs to observe a six-month moratorium until concerns about the technology can be addressed. Just last week, KPMG convened what might be described as an emergency summit in Vancouver to discuss AI and the opportunities and challenges this rapidly developing technology presents.
“The purpose of it was really to start a conversation around what’s becoming very clearly a very transformative piece of technology that is just accelerating in terms of its adoption,” said Walter Pela, regional managing partner for KPMG. “There’s obviously concerns and issues. At the same time, it is a tool that’s being adopted.”
In fact, it’s being adopted by businesses in the U.S. a lot faster than in ÎÚÑ»´«Ã½, according to a KPMG survey released last week.
“The pace in ÎÚÑ»´«Ã½ right now of AI adoption in business is about half of what it is in the U.S., according to a recent poll we did in February,” Pela said.
Vancouver does not have pure-play AI companies or institutes, like Montreal’s Mila research institute, but it has developed a hub of applied AI companies.
Computer scientists have been developing machine learning and artificial intelligence for decades. But it wasn’t until San Francisco, Calif.-based OpenAI made its ChatGPT chatbot available to the public that ordinary people got to see just how powerful this one type of AI already is.
The pace of OpenAI’s progress has generated both awe and alarm.
One concern about generative AI programs like ChatGPT is that they could be used for fraud, cybercrime and the amplification of misinformation. Another is that the technology’s level of disruption – at least similar in scale to that of the internet, if not greater – could put a lot of people in creative fields and knowledge industries out of work in fairly short order.
ChatGPT is just one type of generative AI – technology that has the capacity to generate text, images, videos or music that look or sound like they were created by humans.
ChatGPT is text-based, and is basically like a super digital library containing a massive corpus of text from the Internet – a library with the ability to learn, to respond to commands and to write anything from song lyrics to HTML code for websites, all in about 30 seconds. You can ask it to write an essay on virtually any topic, and then, half a minute later, ask to have that essay rewritten in almost any language.
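For readers curious what driving such a system looks like in practice, here is a minimal, illustrative sketch using a small open-source language model (GPT-2, accessed through the Hugging Face transformers library) – far smaller and less capable than the model behind ChatGPT, but it generates text from a prompt in the same basic way:

```python
# A minimal illustration of generative text AI: a small open language model
# (GPT-2, via the Hugging Face `transformers` library) completing a prompt.
# GPT-2 is far smaller and less capable than the model behind ChatGPT, but it
# generates text from a prompt in the same basic way.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Write a short verse about the ocean:"
result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```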
Diffusion models are another type of generative AI, used to turn text prompts into images. Diffusion-based programs like DALL-E, Midjourney and Stable Diffusion have the potential to displace illustrators. In fact, that may be the biggest immediate threat that AI poses – not rogue machines turning their human masters into servants, but sudden, massive displacement of workers in certain industries, such as web design.
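Conceptually, a diffusion model starts from pure noise and applies many small "denoising" steps, each nudging the pixels toward an image that matches the text prompt. The toy sketch below only illustrates that loop; the denoise() function is a placeholder for the large trained neural network a real system such as Stable Diffusion would use:

```python
# A conceptual sketch of how diffusion models turn noise into images: start
# from random pixels and apply many small "denoising" steps, each nudging the
# image toward something that matches the text prompt. The denoise() function
# below is a placeholder for the large trained neural network a real system
# such as Stable Diffusion would use.
import numpy as np

def denoise(image, prompt, step):
    # Placeholder: a real model predicts and removes a little noise per step,
    # conditioned on the prompt. Here we merely shrink the noise slightly.
    return image * 0.95

image = np.random.randn(64, 64, 3)              # begin with pure noise
for step in range(50):                           # many small denoising steps
    image = denoise(image, "a lighthouse at sunset", step)
# After enough steps, a trained model would leave a coherent picture behind.
```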
A Vancouver company called Durable, for example, uses AI for a program that can build basic websites for any type of business in 30 seconds.
“Any knowledge worker that is trained to do certain things – and already they’re interfacing in the digital realm – that’s the first thing that gets impacted,” said Handol Kim, CEO of Variational AI and a board director for AInBC. “So, content writers? Absolutely – already happening. Graphic design, already happening. Lawyers? Starting to happen. Accountants, starting to happen. Software developers? Already you’re getting decent code. It’s not great, but it’s not bad. Here’s the thing – it gets better. Next year, it will get twice as good. The year after that, it will get five times as good.
“Eventually it will be able to make movies. Anything that’s represented digitally and can be manipulated digitally, eventually it can get to a level that’s uncanny.”
“I think it’s fairly clear that there will be job dislocation in fairly short order, I think,” said Steve Lowry, executive director of AInBC. “For fastest change, I think we’ll see in the creative realm generative AI changing the job of designers, photographers, marketers like overnight basically.”
Though AI threatens to make some jobs obsolete, it also creates new opportunities – including jobs in applied AI.
A number of companies in Vancouver are using various types of machine learning and AI for a wide range of applications.
Sanctuary AI, a ÎÚÑ»´«Ã½ company co-founded by Suzanne Gildert and Geordie Rose – the founder of D-Wave Systems, which built the world’s first commercial quantum computer – is using AI in the development of humanoid general-purpose robots.
The company is using AI to develop a “cognitive architecture” for its robots that will “mimic the different subsystems in a person’s brain.” The company expects the robots could take over work that is dangerous or tedious, or that must be done in the vacuum of space.
“In the not-too-distant future, Sanctuary technology will help people explore, settle, and prosper in outer space,” the company said in a news release last year, after securing $75 million in a Series A financing round.
Inverted AI is a Vancouver company that uses deep learning and generative AI to understand the behaviour of drivers, cyclists and pedestrians, for companies developing self-driving vehicles.
Companies developing self-driving cars or advanced driver-assistance systems use simulators. Inverted AI helps to add the irrational human element to those simulations by recording traffic with a drone and then using machine learning to “learn” how humans behave in traffic.
“We record how people behave on the road, both as drivers but also as pedestrians, cyclists and so on, and we use that to improve the realism of simulations for self-driving cars,” said Inverted AI CTO Adam Scibior, an adjunct professor at the University of British Columbia’s computer science department. “We basically make those more realistic.”
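The underlying idea – learn a predictive model of road-user behaviour from recorded trajectories, then use it to drive agents in a simulator – can be sketched very simply. The example below uses fabricated data and a basic regression model purely for illustration; it is not Inverted AI’s actual system:

```python
# A toy sketch of the idea behind behaviour models for driving simulators:
# fit a model on recorded trajectories so it can predict where a road user
# moves next. The data below is fabricated, standing in for drone footage,
# and the simple regression is for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: a road user's current (x position, lateral offset, speed).
observations = np.array([[0.0, 1.0, 8.0],
                         [5.0, 1.2, 9.0],
                         [11.0, 0.9, 7.5]])
# Target: that road user's x position one second later.
next_x = np.array([8.0, 14.0, 18.5])

model = LinearRegression().fit(observations, next_x)
print(model.predict([[3.0, 1.1, 8.5]]))  # predicted next position for a new agent
```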
Variational AI is using a type of machine learning – a variational auto-encoder – to identify small molecules that will bind to protein kinases associated with cancers and tumours. But there are about 500 protein kinases in the human genome, all similar in structure, and finding a molecule that binds only to the kinases associated with cancers is a massive trial-and-error challenge.
“If you have a small molecule that binds to one kinase, it’s going to bind to many others, and you don’t want that,” Handol Kim explained.
Rather than hunt for pre-existing molecules, Variational AI uses generative machine learning to make new ones. In other words, rather than trying to find the right key among hundreds of options, it simply cuts new keys.
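In machine-learning terms, a variational auto-encoder learns a compressed “latent space” of molecules during training; sampling new points in that space and decoding them yields brand-new candidate molecules. The toy, untrained decoder below only illustrates that sampling-and-decoding step, and its output is not a real drug candidate:

```python
# A toy sketch of the "cut new keys" step: a variational auto-encoder learns a
# compressed latent space of molecules during training; sampling a new point in
# that space and decoding it yields a new candidate molecule. The tiny decoder
# below is untrained and its character-string output is illustrative only.
import torch
import torch.nn as nn

latent_dim = 16
vocab = list("CNOc1=()#")                    # toy character set for molecule strings

decoder = nn.Sequential(                     # maps a latent vector to character scores
    nn.Linear(latent_dim, 64),
    nn.ReLU(),
    nn.Linear(64, 20 * len(vocab)),          # 20 characters of "molecule" text
)

z = torch.randn(latent_dim)                  # sample a new point in latent space
logits = decoder(z).view(20, len(vocab))
chars = logits.argmax(dim=1)                 # most likely character at each position
print("".join(vocab[int(i)] for i in chars)) # an untrained, meaningless candidate string
```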
The “generative chemistry” process the company uses has the potential to dramatically accelerate the drug discovery process.
It can take a decade and between $1 billion and $2 billion to take a new drug through clinical trials and approval for use. Kim said machine learning may dramatically reduce both the time and the costs associated with new drug discovery.
“What we’re trying to do is turn years into months,” Kim said. “We’re trying to turn pre-clinical development, move it from hundreds of millions of dollars to single-digit millions.”