II.

Programming languages and creating software

One common way of defining a programming language is: "an artificial language built to allow someone to give instructions to a computer". Computers can’t understand English, Arabic or Chinese, and even though humans can technically learn binary (the base language of computers), almost no one does. That’s why we need some intermediate way of communicating, which we call programming languages.

As seen in section I, programmers have been creating programming languages and software solutions since the early days of computing systems. In this section, you will understand how programming languages have evolved and what you can do with them.

Low-level programming languages

In the early days of computers, binary code and assembly code were the main languages used to instruct a machine: every command ultimately had to be expressed in ones and zeros – binary.

Machine code, also known as binary language, is a series of ones and zeros that represents commands for the processor of a computer (CPU). Assembly language is much more readable than binary language: it uses mnemonic codes to refer to machine code instructions, rather than using the instructions' numeric values directly.
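To make that relationship concrete, here is a minimal sketch of a toy assembler written in Python. The instruction set, its mnemonics and its opcode numbers are all invented for this example (they do not belong to any real CPU); the point is simply that each mnemonic is a human-readable name for a numeric machine-code value.

```python
# A toy "assembler" for an imaginary CPU. The mnemonics and opcode
# numbers below are made up for illustration, not a real instruction set.
OPCODES = {
    "LOAD":  0x01,  # load a value into the accumulator
    "ADD":   0x02,  # add a value to the accumulator
    "STORE": 0x03,  # store the accumulator at a memory address
    "HALT":  0xFF,  # stop the program
}

def assemble(lines):
    """Translate assembly-style text into a list of machine-code bytes."""
    machine_code = []
    for line in lines:
        mnemonic, *operands = line.split()
        machine_code.append(OPCODES[mnemonic])           # the numeric opcode
        machine_code.extend(int(op) for op in operands)  # its operands
    return machine_code

program = ["LOAD 5", "ADD 7", "STORE 0", "HALT"]
print(assemble(program))  # [1, 5, 2, 7, 3, 0, 255]
```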

One may think that these languages are no longer important as they are invisible to most computer users, but the reality is that our modern computers still depend on them: they remain the foundation of every machine. Each CPU can execute a specific set of instructions that depends on the family or architecture of that CPU.

But why would you want to learn low-level languages and low-level programming at all? There are many reasons, including:

  • Some parts of our operating systems, and even viruses, have been written in assembly.

  • If you want to work in GPU programming with technologies like CUDA or OpenCL, an understanding of low-level programming will help you.

  • If you want to get better at machine learning, knowing assembly language helps you understand how memory is managed and where performance-critical code can be optimised.

  • If you want to learn in depth how operating systems work, knowing assembly language will be helpful. Assembly language is typically used in a system's boot code, the low-level code that initialises and tests the system hardware prior to booting the operating system.

  • Assembly language is useful in reverse engineering. Many programs are distributed only in machine code form, which is straightforward to translate into assembly language with a disassembler, but much more difficult to translate into a higher-level language with a decompiler.

Now that you have seen how and why you might study low-level languages further, we are ready to learn about high-level languages.

Pile of software language names

High-level programming languages

We know that a computer understands binary code, but we don’t understand it ourselves – or only a few people in the world do.

In the late 1950s, computer users (mostly scientists and large businesses) often had to write their software themselves. The disadvantage of this was that every business or laboratory needed someone capable of programming the computer, and the software was created for one specific computer system, making it impossible to share with others because it would not be compatible. The invention of compilers supported the development of high-level programming languages: more abstract languages that are easier to understand.

Note

A compiler translates code written in one computer language to another computer language.

Among the first high-level languages were FORTRAN and COBOL, developed in the late 1950s, with BASIC following in the early 1960s. They allowed programs to be specified in an abstract way, independent of the precise details of the hardware architecture of the computer (Wolfram 2002).

These languages are used to write programs, which are complete and functional sets of instructions that computers use to accomplish tasks, like loading a web page, generating statistical analyses, and finding the sum of two numbers. However, the code is not recognised directly by the CPU. Instead, it must be compiled into a low-level language.
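As a deliberately tiny illustration of what a compiler does, the sketch below translates an arithmetic expression into instructions for an imaginary stack machine. The instruction names and the stack machine itself are invented for the example; real compilers perform the same kind of translation, only towards a real CPU's machine code and at a vastly larger scale.

```python
import ast

def compile_expr(source):
    """Compile an arithmetic expression into toy stack-machine instructions."""
    def emit(node):
        if isinstance(node, ast.Constant):  # a literal number
            return [("PUSH", node.value)]
        if isinstance(node, ast.BinOp):     # e.g. left + right
            ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL"}
            return emit(node.left) + emit(node.right) + [(ops[type(node.op)], None)]
        raise ValueError("unsupported syntax")
    return emit(ast.parse(source, mode="eval").body)

print(compile_expr("1 + 2 * 3"))
# [('PUSH', 1), ('PUSH', 2), ('PUSH', 3), ('MUL', None), ('ADD', None)]
```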

Since compiling a large program can take a very long time, programmers also invented interpreters.

Note

An interpreter directly executes instructions written in a programming language without requiring a compiler to compile them into a machine language program.
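To contrast the two approaches, here is a matching interpreter for the same tiny expression language used in the compiler sketch above. Instead of translating the source into another set of instructions first, it walks the parsed expression and computes the result directly – again a simplified illustration, not how any particular production interpreter works.

```python
import ast

def interpret_expr(source):
    """Evaluate an arithmetic expression directly, with no separate translation step."""
    def evaluate(node):
        if isinstance(node, ast.Constant):  # a literal number
            return node.value
        if isinstance(node, ast.BinOp):     # e.g. left + right
            left, right = evaluate(node.left), evaluate(node.right)
            if isinstance(node.op, ast.Add):
                return left + right
            if isinstance(node.op, ast.Sub):
                return left - right
            if isinstance(node.op, ast.Mult):
                return left * right
        raise ValueError("unsupported syntax")
    return evaluate(ast.parse(source, mode="eval").body)

print(interpret_expr("1 + 2 * 3"))  # 7
```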

Some programming languages make use of both compilers and interpreters. If you were to write a Java program in a text editor and compile it with the Java compiler, you would actually be creating something called bytecode. Bytecode can be thought of as an intermediate stage between source code and object code. When a computer executes a Java program, the Java runtime (virtual machine) on that machine interprets the bytecode. This is what makes Java platform-independent – a user only needs the correct Java runtime installed on their machine in order to execute the programs.
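You do not need the Java toolchain to see this idea in action: Python also compiles source code into bytecode, which its virtual machine then interprets. The standard dis module shows the bytecode instructions behind an ordinary function (the exact instruction names vary between Python versions).

```python
import dis

def add(a, b):
    return a + b

# Show the bytecode the Python virtual machine will interpret for add().
# It prints mnemonics such as LOAD_FAST and an addition instruction –
# intermediate bytecode, not the CPU's machine code.
dis.dis(add)
```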

What are the differences between low-level and high-level languages?

The main difference is that high-level languages are much easier for programmers to read, write and reason about, while low-level languages are closer to what the machine itself executes. Let’s look at some other differences:

High-level language | Low-level language
Programmer-friendly language | Machine-friendly language
Less memory efficient | Highly memory efficient
Simpler to debug | Comparatively complex to debug
Simpler to maintain | Comparatively complex to maintain
Portable | Not portable
Can run on any platform | Machine dependent
Needs a compiler or interpreter for translation | Needs an assembler for translation
Widely used for programming | Not commonly used nowadays in programming

Writing your own programs/software

There is a whole discipline dedicated to creating software (products) called software engineering. First you will learn a little about software engineering, and then we will return to programming languages.

What is software engineering?

When you think about software, you typically see an interface that allows you to do something with the computer, such as writing text. Software engineering is defined as the process of analysing user requirements (for example, the need to write and edit text) in order to build the desired software product. It then involves designing, building and testing a software application that will satisfy those requirements.

Example

Let's look at the various definitions of software engineering:

  • IEEE, in its standard 610.12-1990, defines software engineering as the application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software.

  • Fritz Bauer defined it as "the establishment and use of sound engineering principles in order to obtain, economically, software which is reliable and works efficiently on real machines."

  • Boehm defines software engineering as involving "the practical application of scientific knowledge to the creative design and building of computer programs. It also includes associated documentation needed for developing, operating, and maintaining them.”

But does software engineering involve more than coding?

Yes, there are many steps needed to create software, both before and after coding. We call this the Software Development Life Cycle (SDLC) and it is structured in a well-defined sequence of stages that makes the design and development process efficient. The steps are as follows:

  • Communication is the first step. Typically, a possible client of a software company initiates the request for a desired software product.

  • Requirement gathering is about trying to get as much information as possible on the client’s requirements.

  • The feasibility study is where the team comes up with a rough plan for the software process.

  • System analysis is where the project team analyses the scope of the project and plans the schedule and resources accordingly.

  • Software design is where the team takes the knowledge from the requirement and analysis phases and actually designs the software product.

  • The coding or programming phase is where the team starts to write program code in a suitable programming language, aiming to produce efficient, error-free executable programs.

  • Testing is an essential part of the process to discover and fix potential errors.

  • The integration phase is needed if the software should integrate with external entities like databases or other programs.

  • The implementation phase is when the new software is ready and actually installed on user machines.

  • Operation and maintenance is about confirming, in real life, the efficiency of the software. Possible errors are checked and fixed.

Waterfall versus agile development

All the above phases or activities in the SDLC can be executed in different orders, depending on the approach chosen, and different approaches invest more or less time in the different phases of the SDLC. The stages can be carried out strictly in turn, as in the waterfall approach, or they may be repeated over various iterations that emphasise incremental delivery of the software product, as in the agile approach.

Comparison between waterfall and agile development

Traditional methods of software development use what is known as waterfall development. Before software updates could be easily downloaded from or automatically installed over the internet, the waterfall process was designed to ensure that when a program was shipped to the customer, it contained all of the required features, with all known problems tested and fixed, until the next version of the program was due for release. This process is high risk and time consuming because product testing happens at the end, after developers and designers have spent a huge amount of time designing and building the whole program. It also favours engineering efficiency over end-user experience, which can lead to problems the engineers did not foresee and to frustration for end users, who are not involved in the development process after the initial requirements study. That frustration can result in lost business or costly rebuilds.

Agile is a modern best practice for collaborative software development between teams and clients, based on continuous planning, learning and communication, delivering the software incrementally instead of all at once at the end of the project. The end users (the individuals who will actually be using the software) are at the centre of designing requirements and features, and are asked to test them in small increments throughout the project. This way, if a flaw in the process the product is meant to support becomes apparent, adjustments can be made immediately, before building continues. Breaking the process down into smaller parts and continuously testing and integrating the software features in batches spreads the risk of the development investment and speeds up the deployment of software to the users.

Now that we know the whole process behind the creation of software, we will go back to coding and programming languages.

Note

As you know by now, most computer programs are written in a high-level programming language; the human-readable version of a program is called source code. You, like any software developer, can create and edit source code in a high-level language using a software IDE or even a basic text editor.

What is a software IDE?

IDE stands for "integrated development environment": an application that developers use to create computer programs. In this case, "integrated" refers to the way multiple development tools are combined into a single program. For example, a typical IDE includes a source code editor, a debugger and a compiler. Most IDEs also provide a project interface that allows programmers to keep track of all files related to a project. Many support version control as well.

Some IDEs provide a "runtime environment” (RTE) for testing software programs. When a program is run within the RTE, the developer can track each event that takes place within the application being tested. This can be useful for finding and fixing bugs and locating the source of memory leaks. Because IDEs provide a centralised user interface for writing code and testing programs, a programmer can make a quick change, recompile the program, and run the program again. Programming is still hard work, but IDE software helps streamline the development process.

There are an incredible number of computer programming languages that are used by coders, software developers, web developers and other computer science professionals. But how many are there really?

According to Wikipedia, there are about 700 programming languages, including esoteric coding languages. Other sources that only list notable languages still count an impressive 245 languages. Another list, called HOPL, which claims to include every programming language ever to exist, puts the total at 8,945. Some even estimate a total of up to 25,000.

But how can you choose a programming language to learn? And can you actually learn to code? The answer is yes! You can, and you should, as the need to be code literate for various jobs is increasing.

Carlcheo has created a useful infographic to help us choose which programming language to learn, and has collected good starting points for learning some of the languages mentioned.

  • If you want a language for your kids to learn, Scratch is recommended, and once they are done with it, Python is the natural next step.

  • If you want to learn a language to get a job at Facebook or Google, your best choice may be Python. Python also happens to be a great language to choose in general, as it is considered one of the easier languages to learn.

  • If you want to learn an "easy” language, according to developers your choices are Python, Ruby and JavaScript. These languages can provide you with a solid foundation in programming logic and syntax. And once you have a solid foundation, any other language will be easier to pick up.

  • If you want to develop games, C++ tends to be the language of choice.

  • If you want to code at a relatively low level, C and C++ are good choices, as they tend to be compiled directly to the machine language of the platform being used. They also allow you to write in a way that is quite close to most machine code (incrementing pointers, etc.). Rust is a newer language in this space.

  • If you want to work on iPhone (that is, iOS-related) projects, your choice is Swift.

  • If you want to work on Android-related projects, your choice is Java or Kotlin.

  • If you are attracted by the beauty of websites, the chances are high that you will find it interesting to learn user-facing code (front-end web development) and your language of choice will be JavaScript.

  • If you are attracted by servers (back-end web development) and databases, your language of choice may be Ruby or Python.

  • If you already know which part of the technical stack interests you, you will be choosing between front-end and back-end development.

And our final advice: as there are hundreds of languages to pick from, it is a good idea to ask yourself two key questions before making any choice.

  • What is it that made you interested in programming?

  • What do you want to do as a programmer?

There are plenty of opportunities to boost your career. Use the first language to learn how to think like a programmer and learn basic programming logic. And don’t forget that lifelong learning is essential to keep up with language and technology trends.

Next section
III. Understanding and using the software on our devices