Computer Engineering

Computer engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration instead of only software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microcontrollers, microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on how computer systems themselves work, but also how they integrate into the larger picture.

Usual tasks involving computer engineers include writing software and firmware for embedded microcontrollers, designing VLSI chips, designing analog sensors, designing mixed signal circuit boards, and designing operating systems. Computer engineers are also suited for robotics research, which relies heavily on using digital systems to control and monitor electrical systems like motors, communications, and sensors.

In many institutions, computer engineering students are allowed to choose areas of in-depth study in their junior and senior year, because the full breadth of knowledge used in the design and application of computers is beyond the scope of an undergraduate degree. Other institutions may require engineering students to complete one year of General Engineering before declaring computer engineering as their primary focus.

History

The first computer engineering degree program in the United States was established at Case Western Reserve University in 1972. As of 2015, there were 238 ABET-accredited computer engineering programs in the US. In Europe, accreditation of computer engineering schools is done by a variety of agencies that are part of the EQANIE network. Due to increasing job requirements for engineers who can concurrently design hardware, software, and firmware, and manage all forms of computer systems used in industry, some tertiary institutions around the world offer a bachelor’s degree generally called computer engineering. Both computer engineering and electronic engineering programs include analog and digital circuit design in their curriculum. As with most engineering disciplines, a sound knowledge of mathematics and science is necessary for computer engineers.

Work

There are two major specialties in computer engineering: software and hardware.

Computer software engineering

Computer software engineers develop, design, and test software. Some software engineers design, construct, and maintain computer programs for companies. Some set up networks such as “intranets” for companies. Others make or install new software or upgrade computer systems. Computer software engineers can also work in application design. This involves designing or coding new programs and applications to meet the needs of a business or individual. Computer software engineers can also work as freelancers and sell their software products/applications to an enterprise/individual.

Computer hardware engineering

Most computer hardware engineers research, develop, design, and test various computer equipment, ranging from circuit boards and microprocessors to routers. Some update existing computer equipment to be more efficient and to work with newer software. Most computer hardware engineers work in research laboratories and high-tech manufacturing firms; some also work for the federal government. According to the BLS, 95% of computer hardware engineers work in metropolitan areas. They generally work full-time, and approximately 33% work more than 40 hours a week. The median salary for employed qualified computer hardware engineers (2012) was $100,920 per year, or $48.52 per hour. Computer hardware engineers held 83,300 jobs in 2012.

Specialty areas

There are many specialty areas in the field of computer engineering.

Coding, cryptography, and information protection

Main article: Information security

Computer engineers working in coding, cryptography, and information protection develop new methods for protecting information, such as digital images and music, from fragmentation, copyright infringement, and other forms of tampering. Examples include work on wireless communications, multi-antenna systems, optical transmission, and digital watermarking.

Communications and wireless networks

Main articles: Communications networks and Wireless network

Those focusing on communications and wireless networks work on advances in telecommunications systems and networks (especially wireless networks), modulation and error-control coding, and information theory. High-speed network design, interference suppression and modulation, design and analysis of fault-tolerant systems, and storage and transmission schemes are all part of this specialty.

Compilers and operating systems

Main articles: Compiler and Operating system

This specialty focuses on the design and development of compilers and operating systems. Engineers in this field develop new operating system architectures, program analysis techniques, and new techniques to assure quality. Examples of work in this field include post-link-time code transformation algorithms and new operating system development.

Computational science and engineering

Main article: Computational science and engineering

Computational science and engineering is a relatively new discipline. According to the Sloan Career Cornerstone Center, for individuals working in this area, “computational methods are applied to formulate and solve complex mathematical problems in engineering and the physical and the social sciences. Examples include aircraft design, the plasma processing of nanometer features on semiconductor wafers, VLSI circuit design, radar detection systems, ion transport through biological channels, and much more”.

Computer networks, mobile computing, and distributed systems

Main articles: Computer network, Mobile computing and Distributed computing

In this specialty, engineers build integrated environments for computing, communications, and information access. Examples include shared-channel wireless networks, adaptive resource management in various systems, and improving the quality of service in mobile and ATM environments. Some other examples include work on wireless network systems and fast Ethernet cluster wired systems.

Computer systems: architecture, parallel processing, and dependability

Main articles: Computer architecture, Parallel computing and Dependability

Engineers working in computer systems focus on research projects that allow for reliable, secure, and high-performance computer systems. Projects such as designing processors for multi-threading and parallel processing are included in this field. Other examples of work in this field include the development of new theories, algorithms, and other tools that improve the performance of computer systems.

Computer vision and robotics

Main articles: Computer vision and Robotics

In this specialty, computer engineers focus on developing visual sensing technology to sense an environment, represent an environment, and manipulate the environment. The gathered three-dimensional information is then used to perform a variety of tasks, including improved human modeling, image communication, and human-computer interfaces, as well as devices such as special-purpose cameras with versatile vision sensors.

Embedded systems

Main article: Embedded systems

Individuals working in this area design technology for enhancing the speed, reliability, and performance of systems. Embedded systems are found in many devices, from a small FM radio to the space shuttle. According to the Sloan Career Cornerstone Center, ongoing developments in embedded systems include “automated vehicles and equipment to conduct search and rescue, automated transportation systems, and human-robot coordination to repair equipment in space.”

Integrated circuits, VLSI design, testing and CAD

Main articles: Integrated circuit and Very-large-scale integration

This specialty of computer engineering requires adequate knowledge of electronics and electrical systems. Engineers working in this area work on enhancing the speed, reliability, and energy efficiency of next-generation very-large-scale integrated (VLSI) circuits and microsystems. An example of this specialty is work done on reducing the power consumption of VLSI algorithms and architecture.

Signal, image and speech processing

Main articles: Signal processing, Image processing and Speech processing

Computer engineers in this area develop improvements in human–computer interaction, including speech recognition and synthesis, medical and scientific imaging, or communications systems. Other work in this area includes computer vision development such as recognition of human facial features.

Education

Most entry-level computer engineering jobs require at least a bachelor’s degree in computer engineering. Sometimes a degree in electronic engineering is accepted, due to the similarity of the two fields. Because hardware engineers commonly work with computer software systems, a background in computer programming is usually needed. According to the BLS, “a computer engineering major is similar to electrical engineering but with some computer science courses added to the curriculum”. Some large firms or specialized jobs require a master’s degree. It is also important for computer engineers to keep up with rapid advances in technology, so many continue learning throughout their careers.

 

Milky Way

The Milky Way is a galaxy that contains our Solar System.[18][19][20][nb 1] Its name “milky” is derived from its appearance as a dim glowing band arching across the night sky whose individual stars cannot be distinguished by the naked eye. The term “Milky Way” is a translation of the Latin via lactea, from the Greek γαλαξίας κύκλος (galaxías kýklos, “milky circle”).[21][22][23] From Earth the Milky Way appears as a band because its disk-shaped structure is viewed from within. Galileo Galilei first resolved the band of light into individual stars with his telescope in 1610. Until the early 1920s most astronomers thought that the Milky Way contained all the stars in the Universe. Following the 1920 Great Debate between the astronomers Harlow Shapley and Heber Curtis,[24] observations by Edwin Hubble showed that the Milky Way is just one of many galaxies—now known to number in the billions.[25]

The Milky Way is a barred spiral galaxy that has a diameter usually considered to be roughly 100,000–120,000 light-years[26] but may be 150,000–180,000 light-years.[27] The Milky Way is estimated to contain 200–400 billion stars,[28] although this number may be as high as one trillion.[29] There are probably at least 100 billion planets in the Milky Way.[30][31] The Solar System is located within the disk, about 27,000 light-years from the Galactic Center, on the inner edge of one of the spiral-shaped concentrations of gas and dust called the Orion Arm. The stars in the inner ≈10,000 light-years form a bulge and one or more bars that radiate from the bulge. The very center is marked by an intense radio source, named Sagittarius A*, which is likely to be a supermassive black hole.

Stars and gases at a wide range of distances from the Galactic Center orbit at approximately 220 kilometers per second. The constant rotation speed contradicts the laws of Keplerian dynamics and suggests that much of the mass of the Milky Way does not emit or absorb electromagnetic radiation. This mass has been given the name “dark matter”.[32] The rotational period is about 240 million years at the position of the Sun.[15] The Milky Way as a whole is moving at a velocity of approximately 600 km per second with respect to extragalactic frames of reference. The oldest stars in the Milky Way are nearly as old as the Universe itself and thus must have formed shortly after the Big Bang.[9]
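The quoted orbital speed, the Sun’s distance from the Galactic Center, and the rotational period are mutually consistent, as a simple circular-orbit check shows. A minimal sketch in Python, using only the figures above (constants rounded for illustration):

    import math

    # Time for the Sun to complete one circular orbit of the Galactic Center.
    LY_KM = 9.461e12                          # kilometres per light-year
    radius_km = 27_000 * LY_KM                # Sun's distance from the Galactic Center
    speed_km_s = 220                          # orbital speed of stars and gas
    period_s = 2 * math.pi * radius_km / speed_km_s
    period_myr = period_s / (3.156e7 * 1e6)   # 3.156e7 seconds per year
    print(round(period_myr))                  # ~231, close to the quoted ~240 Myr

The small discrepancy is expected, since both the quoted radius and speed are approximate.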

The Milky Way has several satellite galaxies and is part of the Local Group of galaxies, which is a component of the Virgo Supercluster, which again is a component of the Laniakea Supercluster.
Observation data

Type Sb, Sbc, or SB(rs)bc[1][2] (barred spiral galaxy)

Diameter 100–180 kly (31–55 kpc)[3]

Thickness of thin stellar disk ≈2 kly (0.6 kpc)[4][5]

Number of stars 200–400 billion (3×10^11 ± 1×10^11)[6][7][8]

Oldest known star ≥13.7 Gyr[9]

Mass 0.8–1.5×10^12 M☉[10][11][12]

Angular momentum ≈1×10^67 J s[13]

Sun’s distance to Galactic Center 27.2 ± 1.1 kly (8.34 ± 0.34 kpc)[14]

Sun’s Galactic rotation period 240 Myr[15]

Spiral pattern rotation period 220–360 Myr[16]

Bar pattern rotation period 100–120 Myr[16]

Speed relative to CMB rest frame 552 ± 6 km/s[17]

Escape velocity at Sun’s position 550 km/s[12]

Dark matter density at Sun’s position

Did You Know

 

Did You Know :
the most commonly used letter in written English is E
Did You Know :
the longest street in the world is Yonge Street in Toronto, Canada, measuring 1,896 km (1,178 miles)
Did You Know :
honey is the only natural food which never spoils
Did You Know :
the Internet was originally called ARPANET (Advanced Research Projects Agency Network), designed by the US Department of Defense
Did You Know :
lightning strikes the Earth 6,000 times every minute
Did You Know :
if you add up all the numbers from 1 to 100 consecutively (1 + 2 + 3…) it totals 5,050 (see the short check after this list)
Did You Know :
a duck can’t walk without bobbing its head
Did You Know :
the Arctic Ocean is the smallest in the world
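The 1-to-100 sum above is easy to verify with Gauss’s closed-form formula n(n+1)/2; a minimal Python check:

    # Verify the trivia item two ways: brute force and Gauss's formula.
    n = 100
    brute_force = sum(range(1, n + 1))   # 1 + 2 + ... + 100
    closed_form = n * (n + 1) // 2       # Gauss's formula
    assert brute_force == closed_form == 5050
    print(brute_force)                   # 5050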

Why are flexible computer screens taking so long to develop?

It’s common to first see exciting new technologies in science fiction, but less so in stories about wizards and dragons. Yet one of the most interesting bits of kit on display at this year’s Consumer Electronics Show (CES) in Las Vegas was reminiscent of the magical Daily Prophet newspaper in the Harry Potter series.

Thin, flexible screens such as the one showcased by LG could allow the creation of newspapers that change daily, display video like a tablet computer, and yet can still be rolled up and put in your pocket. These plastic electronic displays could also provide smartphones with shatterproof displays (good news for anyone who’s inadvertently tried drop-testing their phone onto the pavement) and lead to the next generation of flexible wearable technology.

But LG’s announcement is not the first time that flexible displays have been demonstrated at CES. We’ve seen similar technologies every year for some time now, and LG itself unveiled another prototype in a press release 18 months ago. Yet only a handful of products featuring flexible displays have come to market, and those have the displays mounted in a rigid holder rather than free for the user to bend. So why is this technology taking so long to reach our homes?

How displays work

Take a look at your computer screen through a magnifying glass and you’ll see the individual pixels, each made up of three subpixels – red, green, and blue light sources. Each of these subpixels is connected via a grid of wires that criss-cross the back of the display to another circuit called a display driver. This translates incoming video data into signals that turn each subpixel on and off.
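To make the driver’s role concrete, here is a minimal sketch in Python; it is an illustration only, not any vendor’s API, and all names are invented. A tiny framebuffer of RGB triples is scanned, and each subpixel value is translated into a drive level:

    # Hypothetical model of a display driver scanning a framebuffer.
    # Each pixel is an (R, G, B) triple of 0-255 subpixel values; the driver
    # turns each value into a duty cycle (fraction of time the subpixel is on).

    framebuffer = [
        [(255, 0, 0), (0, 255, 0)],       # row 0: a red pixel and a green pixel
        [(0, 0, 255), (255, 255, 255)],   # row 1: a blue pixel and a white pixel
    ]

    def drive(framebuffer):
        """Translate incoming pixel data into per-subpixel drive signals."""
        for y, row in enumerate(framebuffer):
            for x, (r, g, b) in enumerate(row):
                for name, value in (("red", r), ("green", g), ("blue", b)):
                    duty = value / 255    # 0.0 = fully off, 1.0 = fully on
                    print(f"pixel ({x},{y}) {name} subpixel -> duty {duty:.2f}")

    drive(framebuffer)

A real driver does this same translation in hardware, row by row, many times per second.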


How each pixel generates light varies depending on the technology used. Two of the most common types seen today are liquid crystal displays (LCDs) and organic light-emitting diodes (OLEDs). LCDs use a white light at the back of the display that passes through red, green and blue colour filters. Each subpixel uses a combination of liquid crystals and polarising filters that act like tiny shutters, either letting light through or blocking it.

OLEDs, on the other hand, are mini light sources that directly generate light when turned on. This removes the need for the white light behind the display, reducing its overall thickness, and is one of the driving factors behind the growing uptake of OLED technology.

The challenges

Whatever technology is used, there are many individual components crammed into a relatively small space; many smartphone displays contain more than three million subpixels, for example. Bending these components introduces strain, which can tear electrical connections and peel apart layers. Current displays use a rigid piece of glass to keep the display safe from the mechanical strains of the outside world, something that, by design, is not an option in flexible displays.

Organic semiconductors – the chemicals that directly produce light in OLED displays – have the additional problem of being highly sensitive to both water vapour and oxygen, gases that can pass relatively easily through thin plastic films. This can result in faded and dead pixels, leaving a less than desirable result.


There’s also the challenge of the large-scale manufacturing of these circuits. Plastics can be tricky materials to work with: they often swell and shrink in response to water and heat, and it can be difficult to persuade other materials to bond to them. In a manufacturing environment, where precise alignment and high-temperature processing are critical, this can cause major issues.

Finally, it’s not just flexible displays that need to be developed. The components needed to power and operate the display also need to be incorporated into any overall design, placing constraints on the kinds of shape and size currently achievable.

What next?

Scientists in Japan have demonstrated how to make electrical circuits on plastic thinner than the width of a human hair in an attempt to reduce the impact of bending on circuit performance. And research into flexible batteries has started to become more prevalent, too.

Developing solutions to these problems is part of a broader area of active research, as the science and technology underlying flexible displays is also applicable to many other fields, such as biomedical devices and solar energy. While the challenges remain, the technology edges closer to the point where devices such as flexible displays will become ubiquitous in our everyday lives.

International team to develop remote-controlled machines to clean up nuclear sites

Engineers are developing remote-controlled submersibles to help in the clean-up of nuclear sites.

The technology is being designed to assess radiation – particularly neutron and gamma ray fields – under water to check the safety and stability of material within submerged areas of nuclear sites such as Fukushima Daiichi.

The technology could also be used to speed up the removal of nuclear waste from decaying storage ponds at the Sellafield Reprocessing facility in Cumbria, thereby shortening decommissioning programmes and potentially delivering cost savings.

Led by engineers at Lancaster University – and involving colleagues at Manchester University, Hybrid Instruments, and partners in Japan – the EPSRC-funded research project will develop a remote-controlled vehicle that can go into these environments to assess radiation levels.

The Fukushima nuclear plant was struck by a tsunami following an earthquake in March 2011. Three of the plant’s six reactors were damaged and had to be flooded with seawater to keep them cool and prevent more damage.

Nuclear fuel debris also needs to be removed to enable safe decommissioning of the reactors; however, it is not known how much there is, what condition it is in, or how likely it is that accidental reactions could be triggered. New detection instruments developed through the project are expected to help identify nuclear fuel and help operators to deal with it safely.

Malcolm Joyce, Professor of Nuclear Engineering at Lancaster University and lead author of the research, said: “A key task is the removal of the nuclear fuel from the reactors. Once this is removed and stored safely elsewhere, radiation levels fall significantly, making the plant much safer, and cheaper, to decommission.

“Our research will focus on developing a remote-operated submersible vehicle with detection instruments that will be able to identify the radioactive sources. This capability does not currently exist and it would enable clean-up of the stricken Fukushima reactors to continue.”

Engineers at Lancaster University will concentrate on radiation detection technology whilst the Manchester team will concentrate on developing the remote-operated vehicle.

Barry Lennox, Professor of Applied Control at the University of Manchester, said: “A key challenge with the remote-operated vehicle will be to design it so that it can fit through the small access ports typically available in nuclear facilities. These ports can be less than 100mm in diameter, which will create significant challenges.”

This two-and-a-half-year international research project also involves Japanese partners, including the Japan Atomic Energy Agency, the National Maritime Research Institute of Japan and the Nagaoka University of Technology.

There is potential for the resulting technology to also be used by the oil and gas sector for assessment of naturally occurring radioactive material in offshore fields.

Researchers on the project include Prof Malcolm Joyce and Dr James Taylor from Lancaster University, and Professor Barry Lennox and Dr Simon Watson from Manchester University.

Richest 1pc own more than the rest of us

PARIS: The richest 1 per cent of the world’s population now own more than the rest of us combined, aid group Oxfam said on Monday, on the eve of the World Economic Forum (WEF) in Davos.

“Runaway inequality has created a world where 62 people own as much wealth as the poorest half of the world’s population — a figure that has fallen from 388 just five years ago,” the anti-poverty agency said in its report published ahead of the annual gathering of the world’s financial and political elites in Davos.

The report, entitled “An Economy for the 1pc”, states that women are disproportionately affected by global inequality.

“One of the other key trends behind rising inequality set out in Oxfam International’s report is the falling share of national income going to workers in almost all developed and most developing countries… The majority of low paid workers around the world are women.”

Although world leaders have increasingly talked about the need to tackle inequality, “the gap between the richest and the rest has widened dramatically in the past 12 months,” Oxfam said.

Oxfam’s prediction, made ahead of last year’s Davos meeting, that the richest 1pc would soon own more than the rest of us, “actually came true in 2015,” it added.

While the number of people living in extreme poverty halved between 1990 and 2010, the average annual income of the poorest 10pc has risen by less than $3 a year in the past quarter of a century, an increase in individuals’ income of less than one cent a year, the report said.

More than 40 heads of state and government will attend the Davos forum which begins late Tuesday and will end on January 23.

Those heading to the Swiss resort town for the high-level annual gathering also include 2,500 “leaders from business and society”, the WEF said in an earlier statement.

Describing the theme — the Fourth Industrial Revolution — WEF founder Klaus Schwab has said it “refers to the fusion of technologies across the physical, digital and biological worlds which is creating entirely new capabilities and dramatic impacts on political, social and economic systems.”

Oxfam International Executive Director Winnie Byanyima, who will also attend Davos having co-chaired last year’s event, said: “It is simply unacceptable that the poorest half of the world’s population owns no more than a few dozen super-rich people who could fit onto one bus.”

Information technology

Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit and manipulate data, often in the context of a business or other enterprise.

The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones. Several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, engineering, healthcare, e-commerce and computer services.

Humans have been storing, retrieving, manipulating and communicating information since the Sumerians in Mesopotamia developed writing in about 3000 BC, but the term information technology in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt and Thomas L. Whisler commented that “the new technology does not yet have a single established name. We shall call it information technology (IT).” Their definition consists of three categories: techniques for processing, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs.

Based on the storage and processing technologies employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450–1840), electromechanical (1840–1940) and electronic (1940–present). This article focuses on the most recent period (electronic), which began in about 1940.

History of computer technology

Devices have been used to aid computation for thousands of years, probably initially in the form of a tally stick. The Antikythera mechanism, dating from about the beginning of the first century BC, is generally considered to be the earliest known mechanical analog computer, and the earliest known geared mechanism. Comparable geared devices did not emerge in Europe until the 16th century, and it was not until 1645 that the first mechanical calculator capable of performing the four basic arithmetical operations was developed.

Electronic computers, using either relays or valves, began to appear in the early 1940s. The electromechanical Zuse Z3, completed in 1941, was the world’s first programmable computer, and by modern standards one of the first machines that could be considered a complete computing machine. Colossus, developed during the Second World War to decrypt German messages, was the first electronic digital computer. Although it was programmable, it was not general-purpose, being designed to perform only a single task. It also lacked the ability to store its program in memory; programming was carried out using plugs and switches to alter the internal wiring. The first recognisably modern electronic digital stored-program computer was the Manchester Small-Scale Experimental Machine (SSEM), which ran its first program on 21 June 1948.

The development of transistors in the late 1940s at Bell Laboratories allowed a new generation of computers to be designed with greatly reduced power consumption. The first commercially available stored-program computer, the Ferranti Mark I, contained 4050 valves and had a power consumption of 25 kilowatts. By comparison the first transistorised computer, developed at the University of Manchester and operational by November 1953, consumed only 150 watts in its final version.

Data storage

Early electronic computers such as Colossus made use of punched tape, a long strip of paper on which data was represented by a series of holes, a technology now obsolete. Electronic data storage, which is used in modern computers, dates from World War II, when a form of delay line memory was developed to remove the clutter from radar signals, the first practical application of which was the mercury delay line. The first random-access digital storage device was the Williams tube, based on a standard cathode ray tube, but the information stored in it and delay line memory was volatile in that it had to be continuously refreshed, and thus was lost once power was removed. The earliest form of non-volatile computer storage was the magnetic drum, invented in 1932 and used in the Ferranti Mark 1, the world’s first commercially available general-purpose electronic computer.

IBM introduced the first hard disk drive in 1956, as a component of their 305 RAMAC computer system. Most digital data today is still stored magnetically on hard disks, or optically on media such as CD-ROMs. Until 2002 most information was stored on analog devices, but that year digital storage capacity exceeded analog for the first time. As of 2007 almost 94% of the data stored worldwide was held digitally: 52% on hard disks, 28% on optical devices and 11% on digital magnetic tape. It has been estimated that the worldwide capacity to store information on electronic devices grew from less than 3 exabytes in 1986 to 295 exabytes in 2007, doubling roughly every 3 years.
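Those growth figures imply the quoted doubling time directly; a quick check in Python, using only the numbers above:

    import math

    # Worldwide storage grew from <3 exabytes (1986) to 295 exabytes (2007).
    start_eb, end_eb = 3, 295
    years = 2007 - 1986                       # 21 years
    doublings = math.log2(end_eb / start_eb)  # ~6.6 doublings
    print(round(years / doublings, 1))        # ~3.2 years per doubling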

Databases

Database management systems emerged in the 1960s to address the problem of storing and retrieving large amounts of data accurately and quickly. One of the earliest such systems was IBM’s Information Management System (IMS), which is still widely deployed more than 40 years later. IMS stores data hierarchically, but in the 1970s Ted Codd proposed an alternative relational storage model based on set theory and predicate logic and the familiar concepts of tables, rows and columns. The first commercially available relational database management system (RDBMS) was available from Oracle in 1980.

All database management systems consist of a number of components that together allow the data they store to be accessed simultaneously by many users while maintaining its integrity. A characteristic of all databases is that the structure of the data they contain is defined and stored separately from the data itself, in a database schema.

The Extensible Markup Language (XML) has become a popular format for data representation in recent years. Although XML data can be stored in normal file systems, it is commonly held in relational databases to take advantage of their “robust implementation verified by years of both theoretical and practical effort”. As an evolution of the Standard Generalized Markup Language (SGML), XML’s text-based structure offers the advantage of being both machine- and human-readable.

Data retrieval

The relational database model introduced a programming-language-independent Structured Query Language (SQL), based on relational algebra.
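As a minimal illustration of these two ideas (a schema defined separately from the data, and declarative SQL queries expressing relational operations), here is a sketch using Python’s built-in sqlite3 module; the table and its contents are invented for the example:

    import sqlite3

    conn = sqlite3.connect(":memory:")   # throwaway in-memory database

    # The schema (structure) is declared separately from the data it will hold.
    conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

    # The rows themselves (the data) are inserted afterwards.
    conn.executemany(
        "INSERT INTO employee (id, name, dept) VALUES (?, ?, ?)",
        [(1, "Ada", "Engineering"), (2, "Grace", "Engineering"), (3, "Alan", "Research")],
    )

    # A declarative query: selection (WHERE) and projection (the column list)
    # are two of the basic operations of relational algebra.
    for (name,) in conn.execute("SELECT name FROM employee WHERE dept = 'Engineering'"):
        print(name)   # Ada, Grace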

The terms “data” and “information” are not synonymous. Anything stored is data, but it only becomes information when it is organized and presented meaningfully. Most of the world’s digital data is unstructured, and stored in a variety of different physical formats even within a single organization. Data warehouses began to be developed in the 1980s to integrate these disparate stores. They typically contain data extracted from various sources, including external sources such as the Internet, organized in such a way as to facilitate decision support systems (DSS).

Data transmission

Data transmission has three aspects: transmission, propagation, and reception. It can be broadly categorized as broadcasting, in which information is transmitted unidirectionally downstream, or telecommunications, with bidirectional upstream and downstream channels.

XML has been increasingly employed as a means of data interchange since the early 2000s, particularly for machine-oriented interactions such as those involved in web-oriented protocols such as SOAP, describing “data-in-transit rather than … data-at-rest”. One of the challenges of such usage is converting data from relational databases into XML Document Object Model (DOM) structures.
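A minimal sketch of that rows-to-XML step, using Python’s standard xml.etree.ElementTree module (element names and data are invented for the example; a full DOM conversion would use xml.dom, but the mapping problem is the same):

    import xml.etree.ElementTree as ET

    # Rows as they might come back from a relational query (hypothetical data).
    rows = [(1, "Ada", "Engineering"), (2, "Grace", "Engineering")]

    # Build an XML tree mirroring the table structure.
    root = ET.Element("employees")
    for emp_id, name, dept in rows:
        emp = ET.SubElement(root, "employee", id=str(emp_id))
        ET.SubElement(emp, "name").text = name
        ET.SubElement(emp, "dept").text = dept

    # Serialise: the result is both machine- and human-readable.
    print(ET.tostring(root, encoding="unicode"))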

Data manipulation

 

Hilbert and López identify the exponential pace of technological change (a kind of Moore’s law): machines’ application-specific capacity to compute information per capita roughly doubled every 14 months between 1986 and 2007; the per capita capacity of the world’s general-purpose computers doubled every 18 months during the same two decades; the global telecommunication capacity per capita doubled every 34 months; the world’s storage capacity per capita required roughly 40 months to double (every 3 years); and per capita broadcast information has doubled every 12.3 years.
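To see what such doubling times mean over that window, a one-line compounding check in Python:

    # A 14-month doubling time over the 21 years from 1986 to 2007
    # compounds to 2^(252/14) = 2^18, roughly a 262,000-fold increase.
    months = 21 * 12
    print(2 ** (months / 14))   # 262144.0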

Massive amounts of data are stored worldwide every day, but unless it can be analysed and presented effectively it essentially resides in what have been called data tombs: “data archives that are seldom visited”. To address that issue, the field of data mining – “the process of discovering interesting patterns and knowledge from large amounts of data” – emerged in the late 1980s.

Perspective

Academic perspective

In an academic context, the Association for Computing Machinery defines IT as “undergraduate degree programs that prepare students to meet the computer technology needs of business, government, healthcare, schools, and other kinds of organizations …. IT specialists assume responsibility for selecting hardware and software products appropriate for an organization, integrating those products with organizational needs and infrastructure, and installing, customizing, and maintaining those applications for the organization’s computer users.”

Commercial and employment perspective

In a business context, the Information Technology Association of America has defined information technology as “the study, design, development, application, implementation, support or management of computer-based information systems”. The responsibilities of those working in the field include network administration, software development and installation, and the planning and management of an organization’s technology life cycle, by which hardware and software are maintained, upgraded and replaced.

The business value of information technology lies in the automation of business processes, provision of information for decision making, connecting businesses with their customers, and the provision of productivity tools to increase efficiency.