The digital industry goes back more than half a century. Software development gained momentum in the 70s and 80s, but with the rise of Internet technologies it reached a new level: we are now almost completely surrounded by digital devices controlled by at least a few lines of program code. Artificial Intelligence solutions will be a similar leap forward as the spread of the Internet; programs might be written by programs in the future…

The stages of software development

Software is nothing more than a series of commands that a computer (hardware) executes. Software is always created for a specific task: a problem arises, and we create a solution.

The following are the stages of development:

  1. Defining the problem: what exactly do I want to solve with the new software?
  2. Determining the required subsets/sub-components

In almost all software, three main components can be distinguished:

Database: the collection of data from which the software works.

Back-end: the engine of the software, the code units performing specific activities.

Front-end: an interface that allows you to manage the software and integrate it into your system.
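As a toy illustration, the three components above can be sketched as layers in a few lines of Python; all the names and data here are invented for the example, not any real system.

```python
# Database layer: the collection of data the software works from.
DATABASE = {"users": [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]}

# Back-end: code units performing specific activities against the data.
def find_user(user_id):
    for user in DATABASE["users"]:
        if user["id"] == user_id:
            return user
    return None

# Front-end: the interface through which the result is presented.
def render_user(user_id):
    user = find_user(user_id)
    return f"User: {user['name']}" if user else "User not found"

print(render_user(1))   # -> User: Alice
print(render_user(99))  # -> User not found
```

Even in a sketch this small, each layer only talks to the one below it, which is what makes the parts replaceable later.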

3. Breaking down the main problem into sub-problems, then scheduling the preparation of the program parts responsible for each.

Organizing the stages of work is especially important in software development, as different parts are constantly interacting with each other. Inaccurate organization can make it impossible to achieve the desired operation.

4. Walking Skeleton

Minimal operation, ground zero: it knows the basics but nothing else.
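A walking skeleton can be sketched as a pipeline in which every stage exists and is wired together, but each one only does the bare minimum; the function names below are illustrative.

```python
def load_data():
    return []          # later: read from a real database

def process(data):
    return data        # later: the actual business logic

def present(result):
    return f"{len(result)} items"   # later: a real user interface

def run():
    # End-to-end wiring: data flows through every layer, even if
    # each layer is still trivial.
    return present(process(load_data()))

print(run())  # -> 0 items
```

The point is that the skeleton already runs end to end, so every later feature is added to a working whole rather than assembled at the end.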

5. Start testing and version tracking

During testing and the addition of new features, several different versions exist in parallel, which further complicates software development. These correlating versions eventually come together into a final version.

6. Beta

Beta is the state when the product is assembled and conceptually ready, after that the priorities are the elimination of errors and further optimization.

7. Final version

The finished, but not quite final, product. The software can always be improved, but improvements beyond this point are regarded as new versions.

The history of software development is also the history of programming languages

Programming languages evolution tree
History of programming languages [https://www.thesoftwareguild.com/blog/history-of-programming-languages/]


The different programming code systems are constantly ‘evolving’: individual programming languages are being developed as we speak, and completely new ones emerge relatively often. In most cases, we group programming languages by their function. We use certain languages for specific tasks, which is why languages ‘wear out’ over time as they outlive their usefulness.

One of the oldest programming languages for machine instructions is C. It has existed since 1972 and is still in use today; it has several versions, and many newer programming languages are derived from it. HTML became essential for displaying web pages, and PHP offered a back-end solution for the web. To work with larger databases in web tools, we needed SQL. JavaScript was created for web browsers in the mid-90s and became one of the basic languages of web programming.

Software soon left computers and, with the spread of digital technology, appeared in countless applied systems and devices. This created a need for programming languages that can run on a wide variety of platforms, on any device and in any environment. Java offered a solution and quickly became one of the most popular programming languages in the world.

Speaking of current-generation languages, we cannot ignore Ruby: born in the mid-90s, it played a significant role in the development of web applications. We should also mention Swift, an open-source language created by Apple in 2014, available for anyone who wants to develop software for Apple platforms.

Last but not least, the famous Python. It is a “high-level” language, meaning it can be used for almost anything, although it is not particularly outstanding at anything. In the last 30 years it has become unavoidable, with quite a fuss around it, as it is one of the basic languages of Machine Learning alongside certain versions of the good old C.

Of course, this is just an ad hoc list. There are hundreds of working programming languages today, and you could argue for god-knows-how-long about their usefulness.

What software does Lexunit use?

Machine learning / data science libraries used by Lexunit

Nowadays, a development team like Lexunit knows a wide variety of programming languages and often uses several of them. In the following part, I will mention a few according to the tasks they are used for.

React.js - a JavaScript library that is useful for designing user interfaces (UI).

TypeScript - a typed superset of JavaScript that speeds up work on really complex projects.

Node.js - another JS development environment; we test our web projects in it.

Python - we use this versatile programming language for a variety of things, including our Machine Learning projects, where we write programs for the statistical analysis of the necessary parameters.

Go - a remote descendant of C adapted to our age, a programming language used to develop complex systems.
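As a rough illustration of the statistical pre-analysis mentioned under Python above, here is a toy sketch using only the standard library; the data and the outlier rule are invented for this example, not taken from any real project.

```python
import statistics

# Invented sample: response times of some measured process, in ms.
response_times_ms = [120, 135, 118, 250, 122, 130, 127]

mean = statistics.mean(response_times_ms)
stdev = statistics.stdev(response_times_ms)

# Flag values more than two standard deviations from the mean.
outliers = [x for x in response_times_ms if abs(x - mean) > 2 * stdev]
print(f"mean={mean:.1f} stdev={stdev:.1f} outliers={outliers}")
```

In real projects this kind of sanity check runs before any model training, so that a single corrupted measurement does not skew the parameters fed to the learning system.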

The near future - Programming programs?

As programming languages have developed, certain languages have long been capable of automatically performing processes that previously had to be programmed by hand. Programs can also run programs: even loading a simple web page requires code snippets written in up to 5-6 different programming languages. These are web-based applications working in the background to deliver any given content the way their creators intended.

The so-called IDEs (Integrated Development Environments) are complete development environments in which other programs can be created without having to write the code from scratch. These are not necessarily systems that require the most serious professional skills; in fact, one of their main goals is to make complex operations feasible on simple - even visual - interfaces. For instance, you can already meet Microsoft Visual C at basic IT and computer training courses.

In these environments, you can edit the source code of programs, run them in a separate environment, and “debug”, that is, start automatic error hunting. Moreover, software development and machine learning can be well coordinated in such systems. Some writing apps already have a sort of “AI”: in online word processors and in Gmail, the software recognizes the most likely next word you want to write and offers to type it at the touch of a button. Similarly, an IDE algorithm might offer to insert complete snippets of program code into the code we are working on.
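The “predict the next word” idea can be shown with a deliberately simple sketch: a bigram model that counts which word most often follows each word. The training text is invented, and real products use far more sophisticated models; this only demonstrates the principle.

```python
from collections import Counter, defaultdict

# Tiny invented "corpus" to learn from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def suggest(word):
    """Offer the most frequent follower of `word`, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # the word most often seen after "the"
```

Scaled up from word pairs to whole code contexts, the same counting-and-suggesting principle is what lets an editor offer a likely completion at the touch of a button.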

Hive Mind in the Sky: Complex Programming Environments in the Cloud

In another post, we looked at the concept of the “cloud” and described why the complex cloud platforms of large tech companies, like Azure or AWS, are so important now. For hardware-intensive activities, we can “rent” capacity on these platforms.

There are opportunities to outsource activities that would have been difficult to imagine even 2-3 years ago. The development of these cloud services is perhaps the most significant development in the entire software industry in the last few years, and sooner or later these cloud systems will be able to provide more and more help in software development itself.

This is a two-way process. On the one hand, many tasks that previously required serious developer knowledge may become unimaginably simple and accessible to those without special qualifications. This process has been going on for some time: today you really don’t need classic programming or web development knowledge to put together an acceptable website or webshop, even with a payment system, social media integration and other features. It’s not effortless, but it doesn’t take years of learning either. These modular, goal-oriented, end-user-designed development solutions may become even more effective in the future if the application of machine learning gains the right momentum here as well.

On the other hand, the efficiency of professional development teams can also reach new levels with smart AI: the coding process speeds up, there will be fewer errors, and more tests can be run, leaving more room for experimentation.

While it’s not new that technology is always evolving, the entirely new technology of artificial intelligence solutions has begun to weave through the industry. It is likely that the original purpose of software - problem solving - will reach new levels, leaving the current one for everyday tasks. Think about it! It’s not only about using artificial intelligence in our software to solve problems, but about using it in software development and design itself. When this becomes reality, even end users will feel the full-blown power of AI…

Software 2.0

If systems are able to teach themselves, their efficiency depends only on capacity. In many cases, it is possible to reach levels that are humanly unattainable and incomprehensible. If a piece of software is constantly able to do the same thing more and more efficiently, its development will lead us to uncharted waters. The application of artificial intelligence technologies is completely different from classical programming. As already mentioned, no one knows exactly what happens in the depths of a neural network. For a while, we humans can shape the edges of the neural network by weighting the process, but the truth is that it seems much more elegant to focus on the outcome. OK, but what do I mean by that?

Andrej Karpathy, Director of AI at Tesla, said this approach allows us to articulate a specific goal for the program. Let’s say the goal is to win a game of Go. We create a program with a neural-network architecture, but at this point it exists only at the skeleton level. After that, the network takes over and identifies the ‘program space’: a set of possibilities in which the self-learning network then finds better and better program variations.
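The idea of searching a ‘program space’ can be illustrated with a deliberately toy sketch: instead of writing the solution directly, we search candidate solutions for one that scores best against a goal. The target and the scoring function below are invented for the example; real systems search over network weights with gradient methods, not bit flips.

```python
import random

random.seed(0)  # make the toy search reproducible

# Invented "ideal program", encoded as bits, which the search must find.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def score(candidate):
    # How many positions agree with the goal.
    return sum(c == t for c, t in zip(candidate, TARGET))

# Start from a random candidate, then hill-climb: flip one bit at a
# time and keep the change whenever the score does not get worse.
best = [random.randint(0, 1) for _ in TARGET]
for _ in range(200):
    candidate = best[:]
    candidate[random.randrange(len(candidate))] ^= 1  # flip one bit
    if score(candidate) >= score(best):
        best = candidate

print(score(best), len(TARGET))
```

Nobody wrote the final bit pattern by hand; only the goal was specified, and the search found a candidate that satisfies it, which is the essence of the approach described above.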

All of this is possible because we live in the Big Data era, where it is easier to collect data and identify the characteristics of a desirable operation (a self-driving car stopping at a red light) than to write a program for it. If we show enough examples of cars stopping at red lights, the car will learn the concept of stopping at a red light and execute it, even if we don’t know exactly how it decides to stop. The task of the “Data Scientist” is to categorize, sort, parameterize, and refine the available data so that the neural network runs on the highest-octane fuel possible.
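A tiny sketch of learning a rule from examples instead of writing it: a one-neuron perceptron shown labelled situations until it finds the “stop” rule itself. The features and data are invented for illustration; a real driving system learns from images, not two hand-picked flags.

```python
# Each example: (light_is_red, pedestrian_present) -> should_stop.
# We never write the rule explicitly; we only label situations.
examples = [
    ((1, 0), 1), ((1, 1), 1), ((0, 1), 1), ((0, 0), 0),
]

weights, bias = [0.0, 0.0], 0.0

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

# Perceptron learning rule: nudge the weights toward each mistake.
for _ in range(10):
    for x, target in examples:
        error = target - predict(x)
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

print([predict(x) for x, _ in examples])  # learned rule matches the labels
```

After training, the model stops exactly when the examples said to stop, yet nowhere in the code is the rule “stop if the light is red or a pedestrian is present” written down; it lives in the learned weights.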

Let's look at some specific examples of this process, which is almost impossible to imagine:

Machine Vision - Until recently, this was a labor-intensive engineering task, combined with a little machine learning. It has since been found that ‘teaching’ with a huge number of images produces better results than any engineering solution that strives for accurate imaging.

Machine Translation - For a long time, translation robots operated on phrases and fixed elements, but neural networks are proving to be better and better.

Games - Not exactly practical, but recent AI experiments are starting to get nice results in extremely complex games like StarCraft. Here, too, feeding the network plenty of winning matches bears fruit. After a while, the network figured out unusual yet effective solutions and caught human players off guard. The experience gained in games can be used in countless areas.

In addition, neural networks have the upper hand in their high degree of flexibility and adaptability. If, for some reason, it were important to make a process twice as fast at the cost of some quality degradation, that wouldn’t be easy to achieve with classic software. With a neural network, we remove half of the channels, retrain, and are done. Still, it’s not a Swiss army knife miracle weapon. The peculiarity of the system is that it is based on learning, so it is defenseless against biased data. Due to human error, it can sink to the deepest depths, resulting in things like the chatbot that became racist in the blink of an eye.
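The “remove half of the channels” idea can be sketched as magnitude pruning: keep the half of the weights with the largest absolute value and zero out the rest. The weight values below are invented; real pruning operates on trained network layers and is followed by retraining.

```python
# Invented weights of some layer; in practice these come from training.
weights = [0.9, -0.05, 0.7, 0.01, -0.8, 0.1]

# Keep the half with the largest absolute value, zero out the rest.
threshold = sorted(abs(w) for w in weights)[len(weights) // 2]
pruned = [w if abs(w) >= threshold else 0.0 for w in weights]

print(pruned)
```

Because small-magnitude weights contribute least to the output, halving the network this way usually costs only a modest amount of quality, which is exactly the trade-off described above.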

The other problem is that we will never know exactly how a neural network makes a decision, so it will not offer a reassuring solution where the whole chain of cause and effect is important and a demonstrable, replicable process is needed. As Karpathy puts it, “in many cases we will be forced to decide between two systems where one is 90% accurate but we understand how it works and the other is 99% accurate but we don’t understand how it works”.

All in all, “software 2.0” can become the dominant solution in situations where we deal with a large number of repeated, easily solved evaluations, and building an algorithm with a “1.0” philosophy would be too complicated.