Is Codex a helpful addition for professional programmers or a threat?

According to veteran programmers who tested it, the artificial intelligence technology that can generate programs in 12 different coding languages will not replace humans.

What is Codex?

Codex is an AI system that can translate natural language to programming code. It was developed by OpenAI, one of the world’s most ambitious research labs.

About 4 years ago, researchers at labs like OpenAI started designing neural networks that analyzed enormous amounts of prose. By pinpointing patterns in all that text, the networks learned to predict the next word in a sequence. The researchers also observed that the system they built could even write its own computer programs, short and simple in the beginning, learning to do so from an untold number of programs posted to the internet.
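The models behind systems like Codex are vastly larger, but the core idea of next-word prediction can be sketched with a toy bigram model. The corpus and function names below are illustrative only, not anything from OpenAI's actual system:

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which in a tiny
# corpus, then predict the most frequent successor. Real systems learn
# these statistics over billions of words with neural networks.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often here
```

The same principle, scaled up enormously and applied to source code instead of prose, is what lets such a model continue a half-written program.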

Is Codex a threat to programmers?

Professional programmers such as Tom Smith and Ania Kubow tested the technology, searching for the answer to this very question. Their findings? After several weeks of working with the new technology, Smith believes it poses no threat to professional coders. In fact, like many other experts, he sees it as a tool that will end up boosting human productivity. It may even help a whole new generation of people learn the art of computing, by showing them how to write simple pieces of code, almost like a personal tutor.

“This is a tool that can make a coder’s life a lot easier,” Smith said.

Codex can generate programs in 12 computer languages and even translate between them. But it often makes mistakes, and though its skills are impressive, it cannot reason like a human. It can recognize or mimic what it has seen in the past, but it is not nimble enough to think on its own. So Codex extends what a machine can do, but it is another indication that the technology works best with humans at the controls.

“AI is not playing out like anyone expected,” said Greg Brockman, chief technology officer of OpenAI. “It felt like it was going to do this job and that job, and everyone was trying to figure out which one would go first. Instead, it is replacing no jobs. But it is taking away the drudge work from all of them at once.”

For more details on the topic: https://medium.com/the-new-york-times/ai-can-now-write-its-own-computer-code-thats-good-news-for-humans-661fe86b85af

The world is dealing with an IT crisis, and it's no secret: the semiconductor shortage.

Semiconductors act as the brains that power our technological devices. These chips, now smaller than a stamp and thinner than a human hair, have revolutionized the modern world. Innovation in the field has led to smarter, faster, and smaller technology (think pacemakers, smartphones, solar energy, self-driving cars, laptops, airplanes, just about everything you use). They're also the second largest export in the U.S. and are responsible for 2 million American jobs.

The recent shortage of semiconductors sent American companies, as well as companies around the world, into crisis, since most of them usually rely on a lean inventory of the chips.

US companies like GM and Ford have announced that they're temporarily shutting down plants because of the semiconductor shortage. This led authorities to assess semiconductor manufacturing capacity stateside, especially as the United States relies on semiconductors as the building blocks of its digital economy. And right now, it doesn't seem to be producing enough.

In the US, in February 2021, President Joe Biden issued an executive order to review America’s industrial supply chain, partially to assess why there was a shortage of production in the United States (microchips included). Federal authorities are considering more R&D measures to bring the whole supply chain home.

This is because U.S. semiconductor companies accounted for half of all global sales, or about $193 billion, in 2020. But only 12% of those chips are actually manufactured in the U.S. So while the U.S. still leads in design, supply-chain issues have become a problem and give China, for example, a lot of leverage over the U.S.

With the start of the COVID-19 pandemic in early 2020, the supply chains for consumer electronics came under enormous pressure: people around the world had to find new ways to work and play.

The car industry was forced to shut down factories during lockdown, which led to cancelled chip orders. But what carmakers did not take into account when forecasting lower demand for the rest of the year was the faster-than-anticipated bounce back. This sent semiconductor supply chains into a downward spiral, creating a shortfall in the tiny electronics across a wide swathe of industries.

While car companies like General Motors, Ford Motor and Volkswagen were forced to temporarily shut down production lines and thus cancelled their chip orders, chip foundries like Taiwan Semiconductor Manufacturing Company (TSMC) reassigned their production capacity for the remainder of 2020 to companies making smartphones, laptops and gaming devices, which were experiencing a surge in demand during the lockdowns. But when car sales increased in the third quarter, chip factories could not meet the high demand and could not respond fast enough.

What was the consequence?

Other industries, especially IT and telecom, experienced a spike in sales due to the pandemic's "stay at home" effect but faced the same challenge: they found themselves unable to secure adequate supplies to meet the increased demand.

Apple, for instance, reported that the semiconductor shortage would cost it US$3 billion to US$4 billion in its financial third quarter to June, with the biggest impact felt on Mac and iPad products. Midea Group, the world's largest maker of white goods like refrigerators, washing machines and air conditioners, said the prices of chips used for home appliances are set to increase as the global shortfall persists.

Xiaomi Corp recently increased the prices of some of its TV models, citing higher prices for key components, while Samsung Electronics and Sony have also raised prices on a range of products.

How long will this crisis last?

Even as Taiwanese semiconductor companies have boosted production in China, it seems the semiconductor shortage that has gripped the world could last well into 2022. Intel, the semiconductor giant, warned on July 23 that the shortages could even extend into 2023.

You can read more about the topic here:

https://www.scmp.com/tech/tech-war/article/3133061/why-there-global-semiconductor-shortage-how-it-started-who-it-hurting

https://www.trtworld.com/business/global-chip-shortage-to-hit-smartphone-market-next-48649

https://fortune.com/2021/07/16/biden-administration-sounds-the-alarm-on-the-semiconductor-crisis/


What comes after Kubernetes in app infrastructure?

Jonas Bonér, CTO and co-founder at Lightbend, said there is a huge gap between the infrastructure and building a full application. This means that, in the near future, developers will need to add more tools to the toolbox and to extend the infrastructure model of isolation into the app itself, creating a powerful yet simple programming model.

But when a technology has reached a certain level of trust, when it's well understood and easily managed, you can say that it is "boring", thus paying it the best compliment there is. Kubernetes has become just that: standard cloud-enabling plumbing that "works."

Tesla, for example, relies on "digital twin" capabilities that power its electric grid, capabilities made possible by the combination of Akka and Kubernetes. Colin Breck, a Tesla engineer, says: "The majority of our microservices run in Kubernetes, and the pairing of Akka and Kubernetes is really fantastic."

What are the unsolved areas of the cloud-native stack that are evolving above Kubernetes? According to Bonér, there are three: application layer composition, stateful use cases, and data-in-motion use cases.

Stateful use cases

Most of the cloud ecosystem mainly tackles so-called 12-factor-style applications. Unless you have a good model and the tools supporting it, in the cloud you're forced back into the three-layer architecture of pushing everything down into the database every time.

"The value is nowadays often in the data, and it's often in the stateful use cases that most of the business value lies — making sure you can access that data fast, while ensuring correctness, consistency, and availability," Bonér says.

Data-in-motion use cases

The Kubernetes ecosystem doesn’t yet offer great support for streaming and event-based use-cases.

"Serverless gets us closer to addressing the problem of extending the model of Kubernetes into the application itself. That's what it's all about. Abstracting away as much as possible, and moving to a declarative model of configuration rather than coding, where you define what you should do and not how you do it," said Bonér.

Application layer composition

“People too often use old tools, habits, and patterns, often originating from the traditional (monolithic three-tier) designs that inhibit and constrain the cloud model delivered by Kubernetes,” Bonér says. So what needs to be done is to extend the model of containers, service meshes, and orchestration all the way up to the application/business logic. This way, we will leave the developer with the essence: the business logic and its workflow.


All in all, as the cloud-native stack continues to evolve above the Kubernetes infrastructure level, it will be interesting to see how these concepts play out to serve specific language developers.

Read more on the topic here: https://www.infoworld.com/article/3567648/what-comes-after-kubernetes.html


In 2021, it seems, we've had two constants: the world living with the coronavirus pandemic and a steady flow of tech acquisitions.

Global tech merger and acquisition deals last year totaled $634 billion, a 91.8% year-over-year increase, according to GlobalData. Among the big deals were the $35 billion acquisition of Xilinx by Advanced Micro Devices and Salesforce's $27.7 billion acquisition of Slack.

But they're not the only ones worth looking at. We've selected some other tech acquisitions that took place this year which will most likely reshape the tech environment as we know it.

IBM delves into observability for customers

IBM announced the acquisition of Turbonomic at the end of April.

Turbonomic specializes in Application Resource Management (ARM) and Network Performance Management (NPM) software. Turbonomic uses machine learning to spot application performance issues and optimize underlying resources. This applies to containers, VMs, servers, storage, networks, and databases.

This acquisition will help IBM offer a greater range of AIOps and observability options for customers, particularly through its IBM Cloud Pak for Watson AIOps.

Microsoft boosts its upstream open-source contributions

Microsoft made a move to boost its capabilities in the Kubernetes space with the acquisition of German firm Kinvolk. This also took place in late April.

Founded in 2015, Kinvolk has been building enterprise-grade tools to help developers adopt cloud-native technologies such as Kubernetes and Flatcar Container Linux, as well as the Lokomotive and Inspektor Gadget projects.

Microsoft expects to integrate the Kinvolk team and technology into the team responsible for its managed Azure Kubernetes Service (AKS). This will boost Microsoft’s upstream open-source contributions.

UiPath takes step forward for its enterprise-ready platform

On March 23rd, RPA vendor UiPath made an addition of its own, picking up the Denver, CO-based firm Cloud Elements.

Cloud Elements specializes in API integration, similar to Mulesoft and Apigee, which are now part of Salesforce and Google, respectively.

For UiPath, a Romanian-born start-up, this capability could allow customers to better link processes that span various enterprise systems to build more effective automations.

SAP digs deeper into cloud-native enterprise intelligence

German software firm SAP announced it’s acquiring fellow German firm Signavio, which specializes in cloud-native enterprise business intelligence for processes and management, in late January.

This acquisition will add Signavio-designed solutions to the bundle of existing SAP software and services aimed at offering customers “business transformation-as-a-service”.

SAP will aim to use Signavio’s expertise around business process intelligence to help more customers optimize these processes as they become more digital.

Qualcomm strengthens semiconductors market position

No doubt about it: 2020 brought a burst of semiconductor consolidation. Hot on the heels of this phenomenon, Qualcomm announced it was acquiring Nuvia in early January. This 2021 tech acquisition led the way for the year's M&A operations.

Nuvia was founded by a team of Apple engineers and makes high-performance CPU chips.

Together, the two companies will be positioned to deliver a new class of products and experiences for the 5G era.

You might also find interesting: RPA usage in SMEs, on the rise in Eastern Europe

A study was recently conducted among small and medium-sized enterprises in Eastern Europe. The subject: the adoption, dissemination and evolution of RPA in business processes.

For these companies, RPA has become increasingly important during the Covid-19 pandemic. It has also become key to a successful remote workflow.

More to the point, almost 1 in 4 small and medium-sized companies in Eastern Europe confirm that they have a need for software robots, according to a survey published on a business portal.

RPA can help boost efficiency and lower costs

Almost half (47.4%) of the companies believe that intelligent software robots are useful to eliminate repetitive actions.

At the same time, almost 30% of companies believe that the use of software robots can lead to an increase in the efficiency of remote work.

Just as important, companies believe that by integrating RPA technologies they will get lower costs (31.6%) and increased sales (18.4%).

In addition, 39% believe that the team would benefit from the support of RPA technology to leave boring and repetitive tasks to intelligent software robots.

You might also find interesting: Coding activates the brain differently from maths

The dangers of RPA

Only 8% of Eastern European SMEs believe that their teams may be reluctant to integrate technology into the company’s operational processes.

In addition, 29% believe that there may be some fears among employees that software robots could replace certain positions in the company.

High costs, yet viable investment

RPA is an industry that has accelerated strongly in recent years. In Eastern European countries such as Romania, there’s been a rise in global providers of solutions based on this technology.

More to the point, 1 in 4 companies in this area are already using automation technologies. Half of them say they are considering integrating RPA technology into their business this year.

The main hurdle to RPA adoption? 1 in 2 companies cites the high cost, and 1 in 3 companies believe they will face a lack of training of employees on the adoption and use of such a technology.

Neuroscientists from MIT have discovered that brain activity while coding differs from processing language or doing mathematics.

Many people liken coding to learning a new foreign language. And, granted, there are certainly many similarities. To the brain itself, however, the two seem to be quite different.

Researchers took fMRI brain scans of young adults working through a small coding challenge, using both Python and the visual programming language ScratchJr. The purpose was to see which parts of their noggins lit up.

Almost no response was seen in the language processing parts of the brain.

Instead, it appears that coding activates the ‘multiple demand network’ of our brains. This area “is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles.”

Yet when solving maths problems directly, slightly different brain activity patterns emerge.

The multiple demand network is spread throughout the frontal and parietal lobes of the brain. Previous studies have found that math and logic problems dominate the multiple demand regions in the left hemisphere. Tasks involving spatial navigation lean on the right hemisphere more than the left.

Coding activates both the left and right sides of the multiple demand network. This counters the belief that it causes the same brain activity as maths. One interesting fact: ScratchJr activated the right side slightly more than the left.

You can find a full copy of the study here: https://www.biorxiv.org/content/10.1101/2020.04.16.045732v2.full.pdf

You might also find interesting: https://www.vonconsulting.net/ai-automated-coding/

Guido van Rossum launched Python on February 20th, 1991. Python is known as an incredibly versatile language. It is used in developing some of the most popular web applications, from Instagram to Dropbox.

At the same time, it is a gateway language for many in the world of software development.

Moreover, it is frequently taught to schoolchildren and people worldwide who lack any prior programming experience.

Read more details here: https://www.vonconsulting.net/study-python-is-the-top-programming-language-of-2020/

One reason for the popularity of this programming language lies in its simplicity. Its users do not need to understand compilers or assemblers, nor many of the other tiny details that other programming languages require.

Feedback is instant, and Python is improving all the time. In addition to its popularity among entry-level users, Python is rapidly becoming a priority within the business environment. It has also found favor serving as a 'glue language'.

Large development projects always have a trade-off between scale and speed. The typical software stack that a large organization uses every day may include code written in several different languages. Moreover, the underlying data may be stored in numerous formats, languages, and locations.

In such environments, Python has taken root as a subtle but powerful way to bridge different applications and code libraries.

When Python is used as glue code between compiled languages, development cycles are shortened. Results are more interactive and quicker to observe. At the same time, delays caused by things such as long compile times are eliminated.
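A minimal sketch of what "glue code" means in practice: one system emits CSV, another expects JSON, and a few lines of Python bridge the two without touching either system's internals. The system names in the sample data are hypothetical:

```python
import csv
import io
import json

# Hypothetical CSV output from one tool in a larger stack.
csv_data = "name,language\nweb-api,Java\netl-job,C++\n"

# Parse the CSV into dictionaries, then re-emit as JSON for a
# downstream service that only consumes JSON.
rows = list(csv.DictReader(io.StringIO(csv_data)))
json_payload = json.dumps(rows, indent=2)
print(json_payload)
```

Because Python ships with parsers for common formats in its standard library, this kind of bridging requires no compilation step, which is exactly why it shortens the development cycle.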

You might also find interesting: https://www.vonconsulting.ro/tech-trends-in-pandemic-times/

Researchers from MIT and Intel have created AI automated coding. Its name? MISIM, an algorithm that can create algorithms. What does that mean for software developers?

For most of us, writing code is like learning a foreign language. But no more! A team of researchers from MIT and Intel are looking to change all that by building AI automated coding.

The new technology is named MISIM (Machine Inferred code Similarity). MISIM studies snippets of code to understand what a piece of software intends to do, using a pre-existing catalogue of code to infer the intent behind new code.

Will this actually help software developers? The Intel-MIT team says yes. MISIM will help developers working on software by suggesting other ways to “attack” a program. MISIM will also aid them in offering corrections and options that will make the code more efficient.

The principle behind MISIM is not new. Technologies that try to determine whether one piece of code is similar to another already exist, and developers use them, but they focus on how code is written rather than on what it intends to do. MISIM can act like a recommendation system: it suggests different ways to perform the same computation, ones that are faster and more efficient.
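To illustrate the distinction between "how code is written" and "what it intends to do", here is a toy sketch (not MISIM itself, which learns similarity from code structure): two syntactically different functions are judged similar because they agree on sample inputs. All function names are invented for the example:

```python
def sum_loop(nums):
    # Written as an explicit loop.
    total = 0
    for n in nums:
        total += n
    return total

def sum_builtin(nums):
    # Written with the built-in; different syntax, same intent.
    return sum(nums)

def behaviorally_similar(f, g, samples):
    """Return True if f and g produce identical outputs on every sample."""
    return all(f(s) == g(s) for s in samples)

samples = [[1, 2, 3], [], [10, -4]]
print(behaviorally_similar(sum_loop, sum_builtin, samples))  # True
```

A syntax-based comparison would rate these two functions as quite different; an intent-based one, as MISIM aims to be, would rate them as near-identical.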

Software development is becoming more and more complex, so technologies such as MISIM could have a significant impact on productivity. This was the opinion of Justin Gottschlich, the lead of Intel's machine programming research team.

More details about the MISIM algorithm here: https://www.zdnet.com/article/software-developers-how-plans-to-automate-coding-could-mean-big-changes-ahead/

You might also find interesting: https://www.vonconsulting.net/study-tech-trends-in-pandemic-times-1-in-4-people-learned-to-code-during-lockdown/

Although it was created 30 years ago, Python seems to be holding firm ground in the top programming languages of 2020, according to a study cited by www.developer-tech.com.

Ideally suited to artificial intelligence and web development, still considered easy to learn, and taught in many universities worldwide, Python scored 100 points in a survey meant to identify not only programming language preferences, but also programmers' unique needs and interests.

Java takes second place

Newer, and younger, than Python, Java was created 25 years ago and is known for its versatility, with the language powering mobile, desktop, and web applications and games.
It surges ahead of Ruby, R, and Arduino, to name but a few, in programmers' preferences and usage, and it is still a popular option for the world's most used mobile operating system, Android, despite intense lobbying for Java to be replaced by Kotlin.
Java scored 95.3 points on the scorecard.

Just 1 point behind Java: C

C is the oldest of the top three languages ranking in this survey. C was created 48 years ago, in 1972, and continues to be the language primarily used for system development work, such as drivers and operating systems, but also applications that require large amounts of calculations.

C and C++ ranked in the aforementioned study at 94.6 points and 87 points, respectively.

The R-ise of R

Universities and research institutes embrace Python and R for their statistical analyses. And now more than ever, with vast amounts of statistics and data mining needed in the search for a Covid-19 vaccine, statistical programming languages that are easy to learn and use have gained noticeable popularity.

A statistical language, R saw an interesting increase in another ranking, the TIOBE index, where it sits just behind Visual Basic, with the top 5 held by C, Java, Python, C++, and C#.


Article brought to you by VON Consulting Tech Division. People. Quality. Tech.
VON Consulting Tech Division is a start-up operating also in Düsseldorf, Germany, which provides hardware design and verification services, IT support and software development for customers in different industries, mainly in IT, telecom, and networking and semiconductors industries. See more on http://www.vonconsulting.net.

The Covid-19 pandemic shut the door on many plans. Plans turned into day-by-day approaches, particularly where daily livelihood was concerned, as job stability had to navigate mass layoffs and furloughs in some fields.

Yet, despite this rather bleak scenery, something notable happened: employees turned to learning new skills to keep their leverage in the job market, as well as to gain a new sense of personal development. Case in point: technology proficiency.

The golden top 3 podium: Python, Java and C++

According to a study that collected data in August 2020 from more than 1,000 people in the United States, which is cited by www.developer-tech.com, around 1 in 4 people spent time learning coding languages during the lockdown.
The most commonly learned programming language was Python, followed by Java and C++.

Millennials, most engaged with new tech trends

70% of the study respondents said their technology skills moderately or greatly improved since the Covid-19 outbreak. Breaking it down by generation, millennials, at nearly 3 out of 4 respondents, were the most likely to have improved their tech skills, with Generation X not far behind.
Baby boomers were considerably less likely to report any tech improvement; still, over half said they were more skilled now than they were before the Covid-19 pandemic.

Biggest motivation to learn code: career development

The greatest motivations for people setting out to improve their skills were career development (55%), personal development (46%), and improving job search prospects (33%).

Online e-zines, online channels, and mostly freely available content were the top sources of training material for most (66%) people boosting their skills, while 1 in 3 turned to paid resources.

On average, people spent 7.2 hours per week improving their tech skills, with the most time spent learning coding and programming languages, while improving telecommunication proficiency required the least study time.

One other interesting aspect to consider: people who had taken advantage of employer-provided training opportunities were much more inclined to pursue development on their own or through paid resources.
Over one-third of respondents (37%) whose employers didn’t offer technology education opportunities reported wishing their employer would do so.

Overall, close to 1 in 2 respondents believe their new or improved tech skills will be very or extremely beneficial to their career.

Article brought to you by VON Consulting Tech Division. People. Quality. Tech.
VON Consulting Tech Division is a start-up from San Diego, CA, which provides hardware design and verification services, IT support and software development for customers in different industries, mainly in IT, telecom, and networking and semiconductors industries. See more on http://www.vonconsulting.net.