THE LANGUAGE OF TECH

Machine learning is fast becoming a stale buzzword, having made the rounds in the mainstream media. Artificial intelligence is quickly replacing it as the go-to tech goal for forward-thinking companies hoping to implement robust forecasting.
Most decision-makers know better than to follow trends blindly: they seek to understand what those trends mean and what opportunities they offer. Yet they cannot be expected to be experts in every domain, and so they may be forced, sometimes against their better judgement, to take the disruptive capacity of these new discoveries on faith and gamble on whether their next move will be successful.
Peacock Solutions’ philosophy is to keep a clear understanding of why current technologies are used the way they are. Following that principle, we describe here how and why today’s tech trends are what they are, where they are heading, and the opportunities they will provide in the future.


MACHINE LEARNING

Nowadays, when we say machine learning, we mostly mean neural networks trained with backpropagation and supervised learning. Machine learning is the more general term, and it is commonly used as an umbrella for neural networks; but the actual gold mine is the power of neural networks themselves. Their effectiveness has prompted many a blog post, and well into 2018 we keep finding solutions to problems that once seemed impossible.

Yet the field was developed, along with much of computing theory, in the 1940s. The computers of that era did not possess enough capacity to perform these calculations in a timely fashion, which goes a long way towards explaining why the 1950s are not known today as the machine learning decade: the best choice at the time was often to design a more specialised, more constrained but more efficient algorithm. Nonetheless, the concept of a neural network was already fleshed out by the mid-60s. In multiple papers spanning 1975-1990, Paul J. Werbos helped develop the idea of backpropagation: having the algorithm adjust its internal parameters according to the errors it makes (propagating the error back onto the weights, so to speak). This advance allowed neural networks to solve the famous exclusive-or (XOR) problem, whose crux is that the correct answer cannot be read off either input alone, nor separated by any single linear boundary.
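To make Werbos’s idea concrete, here is a minimal sketch of a two-layer network learning XOR by backpropagation. It uses plain NumPy; the hidden width, learning rate and iteration count are arbitrary choices, not prescriptions.

    import numpy as np

    # XOR truth table: inputs and expected outputs
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    np.random.seed(0)
    W1, b1 = np.random.randn(2, 4), np.zeros(4)  # input -> hidden
    W2, b2 = np.random.randn(4, 1), np.zeros(1)  # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for _ in range(10000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: push the error back onto the weights
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(2))  # should approach [0, 1, 1, 0]

A network without the hidden layer never masters this dataset, which is precisely why backpropagation through intermediate layers mattered.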

Then, in 1989, the landmark statement known as the Universal Approximation theorem was proved by Cybenko¹. What this theorem demonstrates is that any pair of input and output, provided a continuous function links the two, can be approximated by a very simple (though not necessarily efficient) neural network. This idea hides behind the current hype as the strongest advantage of neural networks. In theory, it removes the need for hypotheses, for human input on what the relationship between cause and consequence should look like: given enough computing power, the network works the laws out by itself. This cannot be overstated: the computer stops being a calculator and starts being a thinker. Certainly, given that machines are still far from possessing the capacity of a human brain, there are restrictions to temper this enthusiasm. This is where human input remains necessary: designing clever networks that (strongly) nudge the machine towards the desired result.
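In Cybenko’s formulation, that very simple network is a single hidden layer of sigmoidal units: for any continuous function f on the unit cube and any tolerance ε > 0, there exist a width N, weights w_i, biases b_i and coefficients α_i such that

    F(x) = \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^{\top} x + b_i\right),
    \qquad |F(x) - f(x)| < \varepsilon \quad \text{for every } x.

Nothing in the theorem bounds N, which is exactly the efficiency caveat above: the network always exists, but it may need to be enormous.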

In recent years, neural networks have undoubtedly gained traction. Not because the technique has become easier, or because of optimisation work, but because the quantity of data available for training the algorithms has reached tremendous amounts. It has become possible to probe the correlations between whatever data we can lay our hands on, as a sort of computer experiment. These experiments, however, leverage the data for seismic results: the forecasting abilities of a neural network are very real. For example, predicting that a slump is coming can protect a company from overstocking. A language-modelling algorithm may detect dangerous wording in PR communications. Thanks to these past years’ interest and developments, such projects have become feasible and seamlessly integrable into a company’s infrastructure.
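As a sketch of the overstocking example, here is a small lag-based demand forecaster using scikit-learn’s MLPRegressor. The sales figures, window size and network width are placeholders; a real deployment would train on the company’s own records.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Placeholder weekly sales; in practice, load the company's records
    sales = np.array([120, 135, 128, 150, 162, 158, 170, 165, 180, 175,
                      190, 185, 172, 160, 155, 148, 140, 150, 158, 166], dtype=float)

    window = 4  # predict next week from the previous four
    X = np.array([sales[i:i + window] for i in range(len(sales) - window)])
    y = sales[window:]

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    model.fit(X, y)

    forecast = model.predict(sales[-window:].reshape(1, -1))[0]
    print(f"next week: {forecast:.1f}")  # a predicted dip warns against overstocking

The same lag-window framing carries over to most one-step-ahead business forecasts; what changes is the signal being fed in and how far back it is worth looking.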

As of 2018, coding neural networks remains in the skillset of a select few: their applications are endless and their efficiency almost unrivalled, and while there has been much progress in widening their appeal and putting together user-friendly versions, the underlying theory requires a solid scientific background. The main pitfall is that a neural network cannot be considered useful until it has completed its training and shows promising results. During the learning phase, the algorithm may output results that are far off, and there is only one way to know whether the network will fulfill its goals, aside from letting it train for days or weeks: understanding the data and being able to quantify what the goal is expected to look like. And so, while it takes a few lines of code to launch such a machine, it takes years of education to know how to design it properly.
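Those few lines are no exaggeration. Here is a hedged Keras sketch: the data is random noise standing in for a real dataset, and the layer sizes and epoch count are placeholder choices, which is exactly where the years of education come in.

    import numpy as np
    from tensorflow import keras

    # Stand-in data: 1000 samples, 10 features, one target each
    X = np.random.rand(1000, 10)
    y = np.random.rand(1000)

    model = keras.Sequential([
        keras.Input(shape=(10,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=10, validation_split=0.2)  # watch val_loss, not just loss

Launching is trivial; knowing whether val_loss is trending towards a quantified, pre-stated goal is the part that demands expertise.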

ARTIFICIAL INTELLIGENCE

2018 is indubitably the year of AI. This next step for machine learning represents something everyone can picture: human-like computers. Fundamentally, there is no great difference between artificial intelligence and machine learning, as the former is in essence a machine that learns. The split exists because there is a conceptual gap between an algorithm that performs a task and a synthetic mind that provides answers. This human perspective on computers is well-founded, however: when a critical duty is being carried out, no one, in this decade, would leave a machine alone where a human could stand alongside it. Driverless cars, for example, are required, and will doubtless be for a long time, to have someone watching over the car’s driving. An AI, however, is something you can trust. It has been designed to be aware, and while algorithms may be allowed to take care of important but tedious tasks, an AI may be given responsibility.

Peacock Solutions’ work on artificial intelligence is built upon responsibility. Because the product can deliver much more than a simple algorithm, there is a fundamental need for trust: trust that it can uphold this responsibility. As an aside, this offers a way to categorize whether what you are looking at is AI: has it been given responsibility?

The important subtlety when considering true AI is the distinction between making choices and bearing responsibility. An AI which merely advises is still an AI: it has been given the responsibility of giving sound and crucial advice to a company whose choices may sometimes depend on those particular recommendations. Building a dependable system requires that the AI be infused with the relevant knowledge, and as we progress, with machines learning more like humans, the designers will unavoidably need to understand for themselves what the machine is trying to understand.

Even though machine learning is far from inactive as a research field², AI arrives as a prospective paradigm shift, where ambition defines the new goals to pursue.

OPPORTUNITIES

We have only entered the beginning of the information age, and yet: if a company exists, it owns data. Much as it owns headquarters, assets and liabilities, it possesses data, either as an instrument for benefit or as a weight to carry. Machine learning has made almost any task involving the processing of this data feasible, allowing companies to get rid of dubious, anecdote-based decision making. A widely seen obstacle is the complete chaos that often surrounds data gathering. Yet, while some human problems may only be solved through human solutions, most of the setbacks in this area disappear with a single well-made algorithm. This stream of information then becomes a powerful tool for any company, regardless of its field: whether that amounts to knowing your customers better, knowing your products better, or staying aware of implicit currents within your business area.
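As a small illustration of taming that chaos, here is a pandas sketch that deduplicates and normalises a hypothetical customer export; the file name and column names are invented for the example.

    import pandas as pd

    # Hypothetical messy export, merged from several departments
    df = pd.read_csv("customer_export.csv")

    df["email"] = df["email"].str.strip().str.lower()        # normalise identifiers
    df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
    df = df.dropna(subset=["email"])                         # drop unusable rows
    df = df.drop_duplicates(subset=["email"], keep="last")   # one row per customer

    df.to_csv("customer_clean.csv", index=False)

One pass like this will not solve organisational problems, but it removes the mechanical noise that usually blocks the first analysis.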

AI takes this strength one step further, providing recommendations and turning data into insights. Decision-makers are better aided in their process, and for specific tasks can even delegate minor choices. The integration of artificial intelligence into business is the catalyst for a rational way forward. As we get better at analysing subtle relationships and complex situations, we move our processes forward with us. However, there is only so much we can do without the proper tools: the ever-growing reliance on data and information requires us to depend on new ones. The time has come to invest in tools for the brain.

1. Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4), 303–314. https://doi.org/10.1007/BF02551274
2. Minar, M. R., & Naher, J. (2018). Recent advances in deep learning: An overview. arXiv:1807.08169 [cs, stat]. https://doi.org/10.13140/RG.2.2.24831.10403