China and the U.S. have dominated AI development, raising concerns that other countries will lose out on its benefits and have no voice in devising regulations.

The escalating trade war between the U.S. and China is chilling global collaboration that has long driven breakthroughs in technology and science. The tiny island nation of Singapore is trying to carve out an independent role in the clash and demonstrate the advantages of cooperation in fields like artificial intelligence.

 

It’s a difficult balancing act. The country, with cordial ties to the two superpowers, is fighting against nationalistic forces on both sides. Artificial intelligence is becoming something of a test case for how independent countries will participate in emerging technologies.

China and the U.S. have dominated AI development, raising concerns that other countries will lose out on its benefits and have no voice in devising regulations. Yet Singapore’s government is investing S$500 million ($360 million) in AI and other digital technologies through 2020 and has attracted Chinese and American companies to the country with policies that support AI research. Singapore’s Communications and Information Minister S. Iswaran jumped into the debate this year, proposing a framework for the ethical use of AI at the World Economic Forum in Davos.

“Singapore has an important role to play,” said Lawrence Loh, an associate professor at NUS Business School. “We will never be able to match the technological prowess of the U.S. and China, but there are certain areas where Singapore can take leadership like using its position to get people to work together.”

Iswaran will elaborate on Singapore’s vision at Bloomberg’s Sooner Than You Think technology conference on Thursday. He will kick off an event that will feature speakers from Microsoft Corp., International Business Machines Corp., Temasek Holdings Pte, China AI pioneer SenseTime Group Ltd., as well as Southeast Asia’s leading tech startups Grab Holdings Inc. and Gojek.

Singapore has long positioned itself as neutral ground. It’s already home to the Singapore International Arbitration Centre and the Singapore International Commercial Court, forums for international dispute resolution. Prime Minister Lee Hsien Loong said in his annual policy speech last month that Singapore will maintain its neutral position and not take sides between the U.S. and China.

The affluent city-state of 5.6 million is not leaving anything to chance when it comes to future-proofing its economy.

It has set up a dedicated inter-agency task force to study all aspects of AI. And in recent weeks, it granted an AI patent to Alibaba Group Holding Ltd. within just three months — a record pace that underlines the country’s determination to move full speed ahead.

“Singapore plays a pivotal role as it facilitates our entry into markets of our interest rapidly,” Benjamin Bai, vice president and chief IP counsel of Alibaba-affiliate Ant Financial, said in a statement released by the Intellectual Property Office of Singapore.

Still, there is skepticism about the country’s prospects. Singapore, like several other countries, is making a genuine push to develop its AI ecosystem, but its effort is tiny compared with the giants, said Kai-Fu Lee, founder of the venture firm Sinovation Ventures.

“Unless Singapore can unify ASEAN and become the undisputed AI leader and supplier in ASEAN countries, its efforts will not lead to a fraction of the U.S. or China,” Lee said in an email.

The government has been stepping up efforts to lure companies working in AI.

Alibaba has opened its first joint research institute outside China in Singapore in collaboration with Nanyang Technological University, while Salesforce.com Inc. opened its first AI research center outside of its research and development hub of Palo Alto, California — adding to a growing list of new research centers including the Singapore Management University’s Centre for AI and Data Governance.

GIC Pte, Singapore’s sovereign wealth fund, has invested in Canadian AI companies, including Montreal-based Element AI Inc., which has set up an office in the city-state after raising $102 million in new funding in 2017.

“It’s very hard to see how things will pan out with the trade war,” NUS Business School’s Loh said. “Singapore’s focus should be technology, not geopolitics.”

Source: The Financial Express

 

An artificial intelligence that detects, tracks and recognises chimpanzees could make studying animals in the wild more efficient.

Arsha Nagrani at the University of Oxford in the UK and her colleagues have developed a facial recognition AI that can detect and identify the individual chimpanzees captured in video footage recorded in the wild. Using the AI, they can cut down the time and resources needed to track animals in their natural habitat.

The algorithm could help researchers and wildlife conservationists study the complex behaviours of chimpanzees and other primates more efficiently.

The team trained the AI on 50 hours of archival footage – spanning 14 years – of chimpanzees in Bossou, Guinea in West Africa. The footage of 23 chimpanzees, with estimated ages ranging from newborn to 57 years, yielded 10 million facial images.

The algorithm learned to continuously track and recognise individuals from raw video footage, says Nagrani.

Accuracy rate

It performed well even on low light and poor-quality images, and worked for images in which the chimps weren’t looking towards the camera. The AI had an overall identity recognition accuracy of 92 per cent, and correctly identified an animal’s sex 96 per cent of the time.

To test its ability against humans, the team then selected 100 random still images and tasked the AI as well as people with identifying the chimpanzee in each image.

The algorithm achieved an accuracy of 84 per cent, taking 30 seconds to complete the task. In comparison, researchers who were experienced in recognising the chimps took 55 minutes and had an average accuracy of 42 per cent.

The algorithm will allow researchers to more efficiently examine how behaviour and social interactions change over multiple years and generations of animals, says collaborator Daniel Schofield. “You can start to build up a social network,” he says.

By quantifying the interactions between individuals, they were able to track changes in community structure over time.

Though the team trained the AI on chimpanzees, it could be applied to other primates, says Nagrani.


Source: newscientist


For us Python software engineers, there’s no need to reinvent the wheel. Libraries like TensorFlow, Torch, Theano, and Keras already define the main data structures of a neural network, leaving us with the responsibility of describing the structure of the network in a declarative way.

Keras gives us a few degrees of freedom here: the number of layers, the number of neurons in each layer, the type of layer, and the activation function. In practice there are many more, but let’s keep it simple. As mentioned above, two special layers need to be defined based on your problem domain: the size of the input layer and the size of the output layer. All the remaining “hidden layers” can be used to learn the complex non-linear abstractions of the problem.
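
To make that concrete before we dive into MNIST, here is a minimal sketch of what a declarative Keras model definition looks like. The layer sizes below are illustrative placeholders only; the actual MNIST model is built later in this post.

import tensorflow.keras as keras

# A minimal, illustrative two-layer classifier (placeholder sizes, not the MNIST model)
model = keras.Sequential()
model.add(keras.layers.Dense(units=64, activation='relu', input_shape=(100,)))  # hidden layer
model.add(keras.layers.Dense(units=3, activation='softmax'))                    # output layer
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])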

Today we’ll be using Python and the Keras library to predict handwritten digits from the MNIST dataset. There are three options to follow along: view the rendered Jupyter Notebook hosted on Kite’s GitHub repository, run the notebook locally, or run the code from a minimal Python installation on your machine.

If you wish to load this Jupyter Notebook locally instead of following the linked rendered notebook, here is how you can set it up:

Requirements:

  • A Linux or Mac operating system
  • Conda 4.3.27 or later
  • Git 2.13.0 or later
  • wget 1.16.3 or later

In a terminal, navigate to a directory of your choice and run:

# Clone the repository
git clone https://github.com/kiteco/kite-python-blog-post-code.git
cd kite-python-blog-post-code/Practical\ Machine\ Learning\ with\ Python\ and\ Keras/

# Use Conda to set up and activate the Python environment with the correct dependencies
conda env create -f environment.yml
source activate kite-blog-post

Running from a Minimal Python Distribution

To run from a pure Python installation (anything after 3.5 should work), install the required modules with pip, then run the code as typed, excluding lines marked with a %, which are used for the IPython environment.

It is strongly recommended, but not necessary, to run example code in a virtual environment. For extra help, see https://packaging.python.org/guides/installing-using-pip-and-virtualenv/

# Set up and Activate a Virtual Environment under Python3

$ pip3 install virtualenv
$ python3 -m virtualenv venv
$ source venv/bin/activate


# Install Modules with pip (not pip3)

(venv) $ pip install matplotlib
(venv) $ pip install scikit-learn
(venv) $ pip install tensorflow

Okay! If these modules installed successfully, you can now run all the code in this project.

In [1]:

import numpy as np
import matplotlib.pyplot as plt
import gzip
from typing import List
from sklearn.preprocessing import OneHotEncoder
import tensorflow.keras as keras
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import itertools
%matplotlib inline

The MNIST Dataset

The MNIST dataset is a large database of handwritten digits that is used as a benchmark and an introduction to machine learning and image processing systems. We like MNIST because the dataset is very clean and this allows us to focus on the actual network training and evaluation. Remember: a clean dataset is a luxury in the ML world! So let’s enjoy and celebrate MNIST’s cleanliness while we can 🙂

The objective

Given a dataset of 60,000 handwritten digit images (each represented by 28×28 pixels, where every pixel holds a grayscale value from 0 to 255), train a system to classify each image with its respective label (the digit that is displayed).

The dataset

The dataset is composed of a training set and a test set, but for simplicity we are only going to use the training set. Below we download the training set:

In [2]:

%%bash
rm -Rf train-images-idx3-ubyte.gz
rm -Rf train-labels-idx1-ubyte.gz
wget -q http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
wget -q http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz

Reading the labels

There are 10 possible handwritten digits (0-9), so every label must be a number from 0 to 9. The file that we downloaded, train-labels-idx1-ubyte.gz, encodes labels as follows:

TRAINING SET LABEL FILE (train-labels-idx1-ubyte):

[offset]  [type]           [value]             [description]
0000      32 bit integer   0x00000801 (2049)   magic number (MSB first)
0004      32 bit integer   60000               number of items
0008      unsigned byte    ??                  label
0009      unsigned byte    ??                  label
........
xxxx      unsigned byte    ??                  label

The label values are 0 to 9.

It looks like the first 8 bytes (the first two 32-bit integers) can be skipped, because they contain file metadata that is mainly useful to lower-level programming languages. To parse the file, we can perform the following operations:

  • Open the file using the gzip library, so that we can decompress the file
  • Read the entire byte array into memory
  • Skip the first 8 bytes
  • Iterate over every byte, and cast that byte to integer

NOTE: If this file were not from a trusted source, a lot more checking would need to be done. For the purpose of this blog post, I’m going to assume the file is valid.

In [3]:

with gzip.open('train-labels-idx1-ubyte.gz') as train_labels:
    data_from_train_file = train_labels.read()

# Skip the first 8 bytes (magic number and item count); the rest is label data
label_data = data_from_train_file[8:]
assert len(label_data) == 60000

# Convert every byte to an integer. This will be a number between 0 and 9
labels = [int(label_byte) for label_byte in label_data]
assert min(labels) == 0 and max(labels) == 9
assert len(labels) == 60000

Reading the images

TRAINING SET IMAGE FILE (train-images-idx3-ubyte):

[offset]  [type]           [value]             [description]
0000      32 bit integer   0x00000803 (2051)   magic number
0004      32 bit integer   60000               number of images
0008      32 bit integer   28                  number of rows
0012      32 bit integer   28                  number of columns
0016      unsigned byte    ??                  pixel
0017      unsigned byte    ??                  pixel
........
xxxx      unsigned byte    ??                  pixel

Reading images is slightly different from reading labels. The first 16 bytes contain metadata that we already know. We can skip those bytes and proceed directly to reading the images. Every image is stored as a 28×28 array of unsigned bytes. All we have to do is read one image at a time and save it into an array.

In [4]:

SIZE_OF_ONE_IMAGE = 28 ** 2
images = []

# Iterate over the train file, and read one image at a time
with gzip.open('train-images-idx3-ubyte.gz') as train_images:
    train_images.read(4 * 4)  # skip the 16-byte header (magic number, image count, rows, columns)
    for _ in range(60000):
        image = train_images.read(SIZE_OF_ONE_IMAGE)
        assert len(image) == SIZE_OF_ONE_IMAGE

        # Convert to numpy
        image_np = np.frombuffer(image, dtype='uint8') / 255
        images.append(image_np)

images = np.array(images)
images.shape

Out [4]: (60000, 784)

Our images array now contains 60,000 images, each represented as a vector of SIZE_OF_ONE_IMAGE values. Let’s try to plot one of them using the matplotlib library:

In [5]:

def plot_image(pixels: np.ndarray):
    plt.imshow(pixels.reshape((28, 28)), cmap='gray')
    plt.show()
plot_image(images[25])

Encoding image labels using one-hot encoding

We are going to use one-hot encoding to transform each target label into a vector.

In [6]:

labels_np = np.array(labels).reshape((-1, 1))

encoder = OneHotEncoder(categories='auto')
labels_np_onehot = encoder.fit_transform(labels_np).toarray()

labels_np_onehot

Out [6]:

array([[0., 0., 0., ..., 0., 0., 0.],
       [1., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 1., 0.]])

We have successfully created input and output vectors that will be fed into the input and output layers of our neural network. The input vector at index i corresponds to the output vector at index i.

In [7]:
labels_np_onehot[999]

Out [7]:
array([0., 0., 0., 0., 0., 0., 1., 0., 0., 0.])

In [8]:
plot_image(images[999])

In the example above, we can see that the image at index 999 clearly represents a 6. Its associated output vector contains 10 entries (since there are 10 possible labels), and the entry at index 6 is set to 1, indicating that 6 is the correct label.
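
To go the other way, from a one-hot vector back to a plain digit, we can take the index of the largest entry with np.argmax. A quick sanity check using the variables defined above:

# The index of the 1 in the one-hot vector is the original digit
assert np.argmax(labels_np_onehot[999]) == labels[999] == 6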

Building train and test split

In order to check that our ANN has been trained correctly, we take a percentage of the training dataset (our 60,000 images) and set it aside for testing purposes.

In [9]:
X_train, X_test, y_train, y_test = train_test_split(images, labels_np_onehot)

In [10]:
y_train.shape

Out [10]:
(45000, 10)

In [11]:
y_test.shape

Out [11]:
(15000, 10)

As you can see, our dataset of 60,000 images was split into a training set of 45,000 images and a test set of 15,000 images (train_test_split holds out 25% of the data by default).

Training a Neural Network using Keras

In [12]:

model = keras.Sequential()
model.add(keras.layers.Dense(input_shape=(SIZE_OF_ONE_IMAGE,), units=128, activation='relu'))
model.add(keras.layers.Dense(10, activation='softmax'))

model.summary()

model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 128)               100480
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
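
As a quick sanity check on those numbers: a Dense layer has one weight per input-unit pair plus one bias per unit, so the parameter counts reported above work out as follows:

# Hidden layer: 784 inputs * 128 units + 128 biases
assert 784 * 128 + 128 == 100480
# Output layer: 128 inputs * 10 units + 10 biases
assert 128 * 10 + 10 == 1290
# Total trainable parameters
assert 100480 + 1290 == 101770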

In [13]:
X_train.shape

Out [13]:
(45000, 784)

In [14]:
model.fit(X_train, y_train, epochs=20, batch_size=128)

Epoch 1/20
45000/45000 [==============================] - 8s 169us/step - loss: 1.3758 - acc: 0.6651
Epoch 2/20
45000/45000 [==============================] - 7s 165us/step - loss: 0.6496 - acc: 0.8504
Epoch 3/20
45000/45000 [==============================] - 8s 180us/step - loss: 0.4972 - acc: 0.8735
Epoch 4/20
45000/45000 [==============================] - 9s 191us/step - loss: 0.4330 - acc: 0.8858
Epoch 5/20
45000/45000 [==============================] - 8s 186us/step - loss: 0.3963 - acc: 0.8931
Epoch 6/20
45000/45000 [==============================] - 8s 183us/step - loss: 0.3714 - acc: 0.8986
Epoch 7/20
45000/45000 [==============================] - 8s 182us/step - loss: 0.3530 - acc: 0.9028
Epoch 8/20
45000/45000 [==============================] - 9s 191us/step - loss: 0.3387 - acc: 0.9055
Epoch 9/20
45000/45000 [==============================] - 8s 175us/step - loss: 0.3266 - acc: 0.9091
Epoch 10/20
45000/45000 [==============================] - 9s 199us/step - loss: 0.3163 - acc: 0.9117
Epoch 11/20
45000/45000 [==============================] - 8s 185us/step - loss: 0.3074 - acc: 0.9140
Epoch 12/20
45000/45000 [==============================] - 10s 214us/step - loss: 0.2991 - acc: 0.9162
Epoch 13/20
45000/45000 [==============================] - 8s 187us/step - loss: 0.2919 - acc: 0.9185
Epoch 14/20
45000/45000 [==============================] - 9s 202us/step - loss: 0.2851 - acc: 0.9203
Epoch 15/20
45000/45000 [==============================] - 9s 201us/step - loss: 0.2788 - acc: 0.9222
Epoch 16/20
45000/45000 [==============================] - 9s 206us/step - loss: 0.2730 - acc: 0.9241
Epoch 17/20
45000/45000 [==============================] - 7s 164us/step - loss: 0.2674 - acc: 0.9254
Epoch 18/20
45000/45000 [==============================] - 9s 189us/step - loss: 0.2622 - acc: 0.9271
Epoch 19/20
45000/45000 [==============================] - 10s 219us/step - loss: 0.2573 - acc: 0.9286
Epoch 20/20
45000/45000 [==============================] - 9s 197us/step - loss: 0.2526 - acc: 0.9302

Out [14]:
<tensorflow.python.keras.callbacks.History at 0x1129f1f28>

In [15]:
model.evaluate(X_test, y_test)

15000/15000 [==============================] - 2s 158us/step

Out [15]:
[0.2567395991722743, 0.9264]

Inspecting the results

Congratulations! You just trained a neural network to predict handwritten digits with more than 90% accuracy! Let’s test the network with one of the pictures in our test set.

Let’s take a random image, in this case the image at index 1010. We start with its true label (in this case a 4, because index 4 of the one-hot vector is set to 1).

In [16]:
y_test[1010]

Out [16]:
array([0., 0., 0., 0., 1., 0., 0., 0., 0., 0.])

Let’s plot the corresponding image:

In [17]:
plot_image(X_test[1010])

[Image: matplotlib plot of the image at index 1010, which clearly shows a 4]

Understanding the output of a softmax activation layer

Now, let’s run this image through the neural network and see what the predicted output looks like:

In [18]:
predicted_results = model.predict(X_test[1010].reshape((1, -1)))

The output of a softmax layer is a probability distribution for every output. In our case, there are 10 possible outputs (digits 0-9). Of course, every one of our images is expected to only match one specific output (in other words, all of our images only contain one distinct digit).
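
For intuition, softmax simply exponentiates each raw score and normalizes, so every output is positive and they all sum to 1. A tiny hand-rolled version (purely illustrative, not part of the model):

# Illustrative softmax: exponentiate and normalize
def softmax(scores: np.ndarray) -> np.ndarray:
    exps = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return exps / exps.sum()

softmax(np.array([2.0, 1.0, 0.1]))  # -> array of three probabilities summing to 1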

Because this is a probability distribution, the sum of the predicted results is ~1.0

In [19]:
predicted_results.sum()

Out [19]:
1.0000001

Reading the output of a softmax activation layer for our digit

As you can see below, the value at index 4 is very close to 1 (~0.999), which means there is roughly a 99% probability that this digit is a 4… which it is! Congrats!

In [20]:
predicted_results

Out [20]:

array([[1.2202066e-06, 3.4432333e-08, 3.5151488e-06, 1.2011528e-06,
        9.9889344e-01, 3.5855610e-05, 1.6140550e-05, 7.6822333e-05,
        1.0446112e-04, 8.6736667e-04]], dtype=float32)
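
Rather than reading the probabilities off by eye, we can ask NumPy for the index of the largest value, which is the predicted digit:

# The predicted digit is the index with the highest probability
np.argmax(predicted_results)  # -> 4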

Viewing the confusion matrix

In [21]:

predicted_outputs = np.argmax(model.predict(X_test), axis=1)
expected_outputs = np.argmax(y_test, axis=1)

predicted_confusion_matrix = confusion_matrix(expected_outputs, predicted_outputs)

In [22]:
predicted_confusion_matrix

Out [22]:

array([[1413,    0,   10,    3,    2,   12,   12,    2,   10,    1],
       [   0, 1646,   12,    6,    3,    8,    0,    5,    9,    3],
       [  16,    9, 1353,   16,   22,    1,   18,   28,   44,    3],
       [   1,    6,   27, 1420,    0,   48,   11,   16,   25,   17],
       [   3,    7,    5,    1, 1403,    1,   12,    3,    7,   40],
       [  15,   13,    7,   36,    5, 1194,   24,    6,   18,   15],
       [  10,    8,    9,    1,   21,   16, 1363,    0,    9,    0],
       [   2,   14,   18,    4,   16,    4,    2, 1491,    1,   27],
       [   4,   28,   19,   31,   10,   28,   13,    2, 1280,   25],
       [   5,   13,    1,   21,   58,   10,    1,   36,   13, 1333]])

In [23]:

# Source code: https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, classes,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """

    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    fmt = 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.tight_layout()


# Compute confusion matrix
class_names = [str(idx) for idx in range(10)]
cnf_matrix = confusion_matrix(expected_outputs, predicted_outputs)
np.set_printoptions(precision=2)

# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
                      title='Confusion matrix, without normalization')

plt.show()

 

Source: kite.com

Some of us take for granted our road maps, not necessarily acknowledging just how much work goes into them, and how quickly they can become outdated due to natural disasters, time, or simply new infrastructure. 

Facebook AI has been working hard to create computer and AI systems that facilitate the road-mapping process around the world. 

Most maps are created for highly developed areas but don't take into account the majority of the world - which relies on dirt or gravel roads, or unpaved paths. 


Google Maps and Apple Maps have certainly been doing their best to provide road maps, but their focus has been mostly on navigating big cities, allowing drivers, cyclists, public transport commuters, and walkers to get to well-known businesses and addresses. 

Now Facebook is stepping in to help fill out the bigger, less-traveled picture, working closely with OpenStreetMap.

Facebook engaging with the public for help

It doesn't simply take a team of researchers in a back room to update a street map. What Facebook is looking for is people on the ground, willing and able to assist it with improving its modern mapping services. 

[Image: The RapiD mapping experience. Source: Facebook]

It's working closely with OpenStreetMap (OSM) on the project and helping validate its roads.

A perfect example of this has been Facebook's mapping of over 300,000 miles of roads across the entirety of Thailand. That effort produced RapiD, a machine-learning-enhanced labeling tool that accelerates the process of tracing computer-readable roads onto satellite images. 

RapiD is an open-source extension of the web-based iD map editor, and it allows human reviewers to work on the maps. This helps with mapping out accurate road systems, and there are safety checks in place in order to ensure top quality results. 

[Image: Left: the segmentation model's per-pixel predictions; bright magenta means a higher probability that the pixel belongs to a road. Right: conflation of the vectorized road data with the existing OSM roads (in white). Satellite images provided by Maxar. Source: Facebook AI]

It's very important to determine exactly which of the highlighted sections are indeed roads. The AI systems help validate these roads, working mostly from satellite imagery, and ensure they're accurately positioned, since many can be mistaken for dry riverbeds, for example. 

 

Anyone can help map the world, simply by joining the OSM ranks.

Source: Interesting Engineering

Microsoft recently announced plans to invest $1 billion in OpenAI, an AI startup co-founded by Elon Musk (no longer involved) and focused on developing human-level artificial intelligence. Normally, this would be an article about robots that can think and how two of the biggest players in the AI industry joining forces could lead to amazing things. But this deal is really creepy. So let’s talk about that instead.

First, OpenAI was founded as a non-profit. Under CEO Sam Altman, the company recently reorganized into what's essentially a for-profit business nested under a non-profit with oversight power. In other words, as Wired reported, OpenAI can operate like a for-profit business, but it has to adhere to the company's original charter of developing AGI that benefits all humankind.

Interestingly enough, the company that OpenAI appears to be trying to catch up to with this deal is Google — another for-profit business once guided by the principle "don't be evil." Despite that motto not having worked out, Google's current resources and talent dwarf those of nearly every other AI venture, OpenAI's included. Enter Microsoft.

The deal seems like a marriage made in heaven: Microsoft has the coffers and hardware, OpenAI has Ilya Sutskever and dozens of other researchers who’d be the smartest people in most rooms. But, based on everything we’ve seen, this isn’t a joint research deal, or a developmental partnership, or even a pledge to work on the same problems together.

It appears to be a deal in which OpenAI will develop all or some of its technology on Azure for Microsoft to sell, distribute, or choose to open source. In return, Microsoft will hand OpenAI cash over the next decade, eventually totaling about $1 billion, but it expects to get all of it back as OpenAI pays for Azure and other compute services.

Lol.

Here’s what Microsoft’s blog post on the deal had to say:

Microsoft and OpenAI will jointly build new Azure AI supercomputing technologies. OpenAI will port its services to run on Microsoft Azure, which it will use to create new AI technologies and deliver on the promise of artificial general intelligence. Microsoft will become OpenAI's preferred partner for commercializing new AI technologies.

OpenAI’s take was a little different:

OpenAI is producing a sequence of increasingly powerful AI technologies, which requires a lot of capital for computational power. The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.

And Microsoft CEO Satya Nadella just made things fuzzier. According to the New York Times:

Mr. Nadella said Microsoft would not necessarily invest that billion dollars all at once. It could be doled out over the course of a decade or more … [and] will be fed back into its own business, as OpenAI purchases computing power.

What’s really going on here? Who knows. New York Times writer Cade Metz had a take that might explain it:

Cade Metz (@CadeMetz): "OpenAI will build narrower forms of A.I. in the meantime, like systems that aim to understand natural language: https://www.nytimes.com/2018/11/18/technology/artificial-intelligence-language.html"

Cade Metz (@CadeMetz): "With the deal, both OpenAI and Microsoft are looking for a little PR."


And AI expert Stephen Merity had an absolutely satisfying thread on the subject, in which he seemingly points out that the only thing keeping OpenAI in line with its non-profit, open-source roots is its promise not to act any differently now that it's ready to commercialize. Big tech promises, eh? What are those worth?

Smerity (@Smerity): "What is OpenAI? I don't know anymore. A non-profit that leveraged good will whilst silently giving out equity for years prepping a shift to for-profit that is now seeking to license closed tech through a third party by segmenting tech under a banner of pre/post "AGI" technology? https://twitter.com/tsimonite/status/1153340994986766336"

Tom Simonite (@tsimonite): "Most interesting bit of the OpenAI announcement: 'we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner.' Commercializing sub-human AI was previously described as an option 'If the timeline is longer' https://www.wired.com/story/compete-google-openai-seeks-investorsand-profits/"

But, hands-down, the most entertaining take on the deal to be found on Twitter was a simple three-tweet interjection by Google AI expert Francois Chollet (opinions are his own, according to bio):

François Chollet (@fchollet): "Many people in the AI community are confused by OpenAI's pivot from non-profit to for-profit, its cult-like, beyond-parody PR about "capturing the lightcone of all future value in the universe", and its billion-dollar partnership with Azure... https://twitter.com/Smerity/status/1153364705777311745"

François Chollet (@fchollet): "Personally, I feel bad for the employees. It must be disappointing to sign up for a non-profit org that aims at doing open AI research in the public interest, only to find out a bit later that your job is now to make Azure a more attractive enterprise AI cloud than AWS & GCP."

François Chollet (@fchollet): "And on top of it, you are now part -- in the eyes of the world -- of a doomsday techno-cult..."

Is OpenAI a doomsday techno-cult? Probably not. But all of this creepy, PR-filled nonsense smacks of the kind of closed-door, marketing crap that makes idealistic developers leave big tech to work in academia or the non-profit sector. For many fans of OpenAI, this is like your favorite punk band selling out and becoming used car salespeople. I’m kind of hoping it’s a doomsday techno-cult instead:

Jack Clark (@jackclarkSF): "Are there any mystical groups that have conducted or outlined ceremonies in data centers, yet? Pagan Computation. Satanic Cable-Routing. Eldritch Free Air Cooling Infrastructure."

Tristan Greene (@mrgreene1977): "This is your best tweet."

