AI, Digital Transformation, and the Singularity

  • August 31, 2017
  • Scott
  • 2 Comments

A lot of talk in digital transformation circles, not to mention business process and decision management circles, is about machine learning and Artificial Intelligence. The words are sprinkled into conversation like magic invocations, but AI has a long history of practical application.

  • Search was long considered an AI topic.
  • Rules engines were once an AI topic.
  • Image recognition (and many other types of pattern recognition) was once an AI topic.

The old joke was, it’s AI until it works, and then you call it something specific that describes what it does for you.

This article – The Singularity is Further than it Appears – has held up well over the two years since it was originally published. Its basic argument against the “Singularity” – machine intelligence becoming self-aware – is:

“Not anytime soon. Lack of incentives means very little strong AI work is happening. And even if we did develop one, it’s unlikely to have a hard takeoff.”

The basic issue is that no one has an economic incentive to build something that has the potential to be emergent. But he then circles back to the full explanation, including excellent references to Vernor Vinge.

“I’ve bolded that last quote because it’s key. Vinge envisions a situation where the first smarter-than-human intelligence can make an even smarter entity in less time than it took to create itself. And that this keeps continuing, at each stage, with each iteration growing shorter, until we’re down to AIs that are so hyper-intelligent that they make even smarter versions of themselves in less than a second, or less than a millisecond, or less than a microsecond, or whatever tiny fraction of time you want.”

But then he goes on to refute that idea of a hard takeoff quite nicely. It’s a fantastic read.

It is also a reminder that rather than get caught up in the hype around Artificial Intelligence, we should probably focus on the benefits of more practical elements under that umbrella:

  • image recognition
  • sentiment analysis
  • OCR
  • fraud detection
  • machine learning as applied to any number of business decisions with supervised training (a rough sketch follows below)
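
Concretely, that last item is the bread-and-butter case: train a model on labeled historical data and feed its scores into a business decision. Below is a minimal sketch, assuming Python with scikit-learn and entirely invented transaction features and labels; the feature names, thresholds, and model choice are illustrative assumptions, not a recommended implementation.

```python
# Hypothetical sketch: supervised learning applied to a business decision
# (here, flagging possibly fraudulent transactions). All features, labels,
# and thresholds are made up purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Invented transaction features: amount, hour of day, account age (days).
X = np.column_stack([
    rng.exponential(scale=100.0, size=n),   # transaction amount
    rng.integers(0, 24, size=n),            # hour of day
    rng.exponential(scale=365.0, size=n),   # account age in days
])

# Invented labels: large, late-night transactions on newer accounts are
# more likely to be marked as fraud in this toy data set.
risk = 0.01 * X[:, 0] + 2.0 * (X[:, 1] >= 22) - 0.005 * X[:, 2]
y = (risk + rng.normal(0.0, 1.0, size=n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Train a simple classifier; its scores would feed a downstream decision,
# e.g. "route this transaction for manual review".
model = LogisticRegression(max_iter=1000, class_weight="balanced")
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```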

The author even makes my point about Search:

“The most successful and profitable AI in the world is almost certainly Google Search. In fact, in Search alone, Google uses a great many AI techniques.”

And he points out that the many techniques used are not about to become sentient.

So our goal with cognitive computing and machine learning is to combine them with real-world process and decision management software and techniques to bring value to your business. And that doesn’t require worrying about the singularity at all.


  • Alberto Manuel

    I tend to agree that the singularity is far from being achieved. It will take a long time for machines to arrive at a state of consciousness. Nevertheless, there are some challenges around AI that are being ignored, like governance: if a life support system fails, who is responsible for the loss? The company that provides the system? The person who wrote the code? And what about ethics? Should a machine make decisions on behalf of a human or not, like ruling in court? There are studies that show the perils and dangers of bias in systems related to criminal activity, and how those systems increase the risk for someone who belongs to a minority by scoring them higher just because they belong to a minority with higher rates of criminal activity. In terms of possibilities, #AI brings a huge new spectrum of change and business transformation, but I also tend to think that many of the issues I point out are neglected. And that is just the tip of the iceberg.

    • Alberto –

      As usual, spot-on analysis. So many implications change with AI. If your AI-driven car has an accident, should you be the one insuring it, or should it be [Google/Uber/etc.] who wrote the AI and/or built the car? And pre-programming prioritization into computer algorithms is different, morally and ethically, from a human making a split-second decision as best they can in the moment, with no premeditation.

      I think you are exactly right about what’s being neglected. In fact, when you bring these issues up, often those who work in our business won’t confront them – either thinking that the world will stay the way it is, just automated, or that, say, the need for insurance will go away because there won’t be any accidents (these people do use software, don’t they?!).