Did You Know

Did You Know: The most commonly used letter in the alphabet is E.

Did You Know: The longest street in the world is Yonge Street in Toronto, Canada, measuring 1,896 km (1,178 miles).

Did You Know: Honey is the only natural food that never spoils.

Did You Know: The Internet was originally called ARPANET (Advanced Research Projects Agency Network), designed by the US Department of Defense.

Did You Know: Lightning strikes the Earth 6,000 times every minute.

Did You Know: If you add up all the numbers from 1 to 100 consecutively (1 + 2 + 3…), the total is 5,050.

Did You Know: A duck can’t walk without bobbing its head.

Did You Know: The Arctic Ocean is the smallest ocean in the world.
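The 1-to-100 fact above is an instance of Gauss’s closed-form sum n(n + 1)/2. A quick sketch in Python (function names are ours, purely for illustration):

```python
# Sum of 1..n two ways: direct addition and Gauss's formula n*(n+1)/2.
def sum_direct(n):
    return sum(range(1, n + 1))

def sum_gauss(n):
    return n * (n + 1) // 2

print(sum_direct(100))  # 5050
print(sum_gauss(100))   # 5050
```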

Why are flexible computer screens taking so long to develop?

It’s common to first see exciting new technologies in science fiction, but less so in stories about wizards and dragons. Yet one of the most interesting bits of kit on display at this year’s Consumer Electronics Show (CES) in Las Vegas was reminiscent of the magical Daily Prophet newspaper in the Harry Potter series.

Thin, flexible screens such as the one showcased by LG could allow the creation of newspapers that change daily and display video like a tablet computer, but can still be rolled up and put in your pocket. These plastic electronic displays could also provide smartphones with shatterproof screens (good news for anyone who’s inadvertently drop-tested their phone onto the pavement) and lead to the next generation of flexible wearable technology.

But LG’s announcement is not the first time that flexible displays have been demonstrated at CES. We’ve seen similar technologies every year for some time now, and LG itself unveiled another prototype in a press release 18 months ago. Yet only a handful of products featuring flexible displays have come to market, and those mount the display in a rigid holder rather than leaving it free for the user to bend. So why is this technology taking so long to reach our homes?

How displays work

Take a look at your computer screen through a magnifying glass and you’ll see the individual pixels, each made up of three subpixels – red, green, and blue light sources. Each of these subpixels is connected via a grid of wires that criss-cross the back of the display to another circuit called a display driver. This translates incoming video data into signals that turn each subpixel on and off.
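As a rough illustration of this grid-addressing idea, a display driver can be modelled as something that fans each incoming pixel’s colour data out to three addressable subpixels. This is a toy sketch, not any real driver interface; all names here are invented:

```python
# Toy model of a display driver: each pixel has red/green/blue subpixels,
# and the driver scans the grid row by row, setting each subpixel's level.
# Purely illustrative -- real drivers emit timing signals, not dictionaries.

WIDTH, HEIGHT = 4, 2  # a tiny 4x2 "display"

def drive_frame(frame):
    """Translate a frame (rows of (r, g, b) tuples) into subpixel levels."""
    subpixel_grid = {}
    for row, line in enumerate(frame):
        for col, (r, g, b) in enumerate(line):
            # Each pixel fans out into three addressable subpixels.
            subpixel_grid[(row, col, "R")] = r
            subpixel_grid[(row, col, "G")] = g
            subpixel_grid[(row, col, "B")] = b
    return subpixel_grid

frame = [[(255, 0, 0)] * WIDTH for _ in range(HEIGHT)]  # an all-red frame
grid = drive_frame(frame)
print(len(grid))          # 24 subpixels: 4 x 2 pixels x 3 colours
print(grid[(0, 0, "R")])  # 255
```

A real smartphone panel does the same bookkeeping for millions of subpixels, which is why the wiring grid behind the display is so dense.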

How each pixel generates light varies depending on the technology used. Two of the most common today are liquid crystal displays (LCDs) and organic light emitting diodes (OLEDs). LCDs use a white light at the back of the display that passes through red, green and blue colour filters. Each subpixel uses a combination of liquid crystals and polarising filters that act like tiny shutters, either letting light through or blocking it.

OLEDs, on the other hand, are mini light sources that directly generate light when turned on. This removes the need for the white light behind the display, reducing its overall thickness, and is one of the driving factors behind the growing uptake of OLED technology.

The challenges

Whatever technology is used, there are many individual components crammed into a relatively small space; many smartphone displays contain more than three million subpixels, for example. Bending these components introduces strain, which can tear electrical connections and peel apart layers. Current displays rely on a rigid piece of glass to shield them from the mechanical strains of the outside world, something that, by design, is not an option for a flexible display.
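The scale of that strain can be estimated with the standard thin-film bending approximation (not given in the article, but useful for intuition): the strain at the surface of a bent film is roughly its thickness divided by twice the bend radius, which is why thinner layers tolerate tighter bends.

```python
# Surface strain of a film bent to radius R: strain ~ t / (2 * R),
# where t is the film thickness (standard thin-film approximation).
def bending_strain(thickness_m, radius_m):
    return thickness_m / (2 * radius_m)

# A 100-micron plastic film bent around a 5 mm radius:
print(f"{bending_strain(100e-6, 5e-3):.1%}")  # 1.0% strain at the surface
```

Halving the thickness halves the strain for the same bend radius, which is one motivation for the ultra-thin circuits mentioned later in the article.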

Organic semiconductors – the chemicals that directly produce light in OLED displays – have the additional problem of being highly sensitive to both water vapour and oxygen, gases that can pass relatively easily through thin plastic films. This can result in faded and dead pixels, leaving a less-than-desirable result.


There’s also the challenge of manufacturing these circuits at scale. Plastics can be tricky materials to work with: they often swell and shrink in response to water and heat, and it can be difficult to persuade other materials to bond to them. In a manufacturing environment, where precise alignment and high-temperature processing are critical, this can cause major issues.

Finally, it’s not just flexible displays that need to be developed. The components needed to power and operate the display also need to be incorporated into any overall design, placing constraints on the kinds of shape and size currently achievable.

What next?

Scientists in Japan have demonstrated how to make electrical circuits on plastic thinner than the width of a human hair in an attempt to reduce the impact of bending on circuit performance. And research into flexible batteries has started to become more prevalent, too.

Developing solutions to these problems is part of a broader area of active research, as the science and technology underlying flexible displays is also applicable to many other fields, such as biomedical devices and solar energy. While the challenges remain, the technology edges closer to the point where devices such as flexible displays will become ubiquitous in our everyday lives.

International team to develop remote-controlled machines to clean up nuclear sites

Engineers are developing remote-controlled submersibles to help in the clean-up of nuclear sites.

The technology is being designed to assess radiation – particularly neutron and gamma ray fields – under water to check the safety and stability of material within submerged areas of nuclear sites such as Fukushima Daiichi.

The technology could also be used to speed up the removal of nuclear waste from decaying storage ponds at the Sellafield Reprocessing facility in Cumbria, thereby shortening decommissioning programmes and potentially delivering cost savings.

Led by engineers at Lancaster University – and involving colleagues at the University of Manchester and Hybrid Instruments, plus partners in Japan – the EPSRC-funded research project will develop a remote-controlled vehicle that can go into these environments to assess radiation levels.

The Fukushima nuclear plant was struck by a tsunami following an earthquake in March 2011. Three of the plant’s six reactors were damaged and had to be flooded with seawater to keep them cool and prevent more damage.

Nuclear fuel debris also needs to be removed to enable safe decommissioning of the reactors; however, it is not known how much there is, what condition it is in, or how likely accidental reactions are to be triggered. New detection instruments developed through the project are expected to help identify nuclear fuel and help operators to deal with it safely.

Malcolm Joyce, Professor of Nuclear Engineering at Lancaster University and lead author of the research, said: “A key task is the removal of the nuclear fuel from the reactors. Once this is removed and stored safely elsewhere, radiation levels fall significantly, making the plant much safer, and cheaper, to decommission.

“Our research will focus on developing a remote-operated submersible vehicle with detection instruments that will be able to identify the radioactive sources. This capability does not currently exist and it would enable clean-up of the stricken Fukushima reactors to continue.”

Engineers at Lancaster University will concentrate on radiation detection technology, whilst the Manchester team will concentrate on developing the remote-operated vehicle.

Barry Lennox, Professor of Applied Control at the University of Manchester, said: “A key challenge with the remote-operated vehicle will be to design it so that it can fit through the small access ports typically available in nuclear facilities. These ports can be less than 100mm in diameter, which will create significant challenges.”

This two-and-a-half-year international research project also involves Japanese partners, including the Japan Atomic Energy Agency, the National Maritime Research Institute of Japan and the Nagaoka University of Technology.

There is potential for the resulting technology to also be used by the oil and gas sector for assessment of naturally occurring radioactive material in offshore fields.

Researchers on the project include Prof Malcolm Joyce and Dr James Taylor from Lancaster University, and Professor Barry Lennox and Dr Simon Watson from Manchester University.

Richest 1pc own more than the rest of us

PARIS: The richest 1 per cent of the world’s population now own more than the rest of us combined, aid group Oxfam said on Monday, on the eve of the World Economic Forum (WEF) in Davos.

“Runaway inequality has created a world where 62 people own as much wealth as the poorest half of the world’s population — a figure that has fallen from 388 just five years ago,” the anti-poverty agency said in its report, published ahead of the annual gathering of the world’s financial and political elites in Davos.

The report, entitled “An Economy for the 1pc”, states that women are disproportionately affected by the global inequality.

“One of the other key trends behind rising inequality set out in Oxfam International’s report is the falling share of national income going to workers in almost all developed and most developing countries… The majority of low paid workers around the world are women.”

Although world leaders have increasingly talked about the need to tackle inequality, “the gap between the richest and the rest has widened dramatically in the past 12 months,” Oxfam said.

Oxfam’s prediction, made ahead of last year’s Davos meeting, that the richest 1pc would soon own more than the rest of us, “actually came true in 2015,” it added.

While the number of people living in extreme poverty halved between 1990 and 2010, the average annual income of the poorest 10pc has risen by less than $3 a year in the past quarter of a century, an increase in individuals’ daily income of less than a single cent a year, the report said.

More than 40 heads of state and government will attend the Davos forum which begins late Tuesday and will end on January 23.

Those heading to the Swiss resort town for the high-level annual gathering also include 2,500 “leaders from business and society”, the WEF said in an earlier statement.

Describing the theme — the Fourth Industrial Revolution — WEF founder Klaus Schwab has said it “refers to the fusion of technologies across the physical, digital and biological worlds which is creating entirely new capabilities and dramatic impacts on political, social and economic systems.”

Oxfam International Executive Director Winnie Byanyima, who will also attend Davos, having co-chaired last year’s event, said: “It is simply unacceptable that the poorest half of the world’s population owns no more than a few dozen super-rich people who could fit onto one bus.”

Information technology

Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit and manipulate data, often in the context of a business or other enterprise.

The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones. Several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, engineering, healthcare, e-commerce and computer services.

Humans have been storing, retrieving, manipulating and communicating information since the Sumerians in Mesopotamia developed writing in about 3000 BC, but the term information technology in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt and Thomas L. Whisler commented that “the new technology does not yet have a single established name. We shall call it information technology (IT).” Their definition consists of three categories: techniques for processing, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs.

Based on the storage and processing technologies employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450–1840), electromechanical (1840–1940) and electronic (1940–present). This article focuses on the most recent period (electronic), which began in about 1940.

History of computer technology

Devices have been used to aid computation for thousands of years, probably initially in the form of a tally stick. The Antikythera mechanism, dating from about the beginning of the first century BC, is generally considered to be the earliest known mechanical analog computer, and the earliest known geared mechanism. Comparable geared devices did not emerge in Europe until the 16th century, and it was not until 1645 that the first mechanical calculator capable of performing the four basic arithmetical operations was developed.

Electronic computers, using either relays or valves, began to appear in the early 1940s. The electromechanical Zuse Z3, completed in 1941, was the world’s first programmable computer, and by modern standards one of the first machines that could be considered a complete computing machine. Colossus, developed during the Second World War to decrypt German messages, was the first electronic digital computer. Although it was programmable, it was not general-purpose, being designed to perform only a single task. It also lacked the ability to store its program in memory; programming was carried out using plugs and switches to alter the internal wiring. The first recognisably modern electronic digital stored-program computer was the Manchester Small-Scale Experimental Machine (SSEM), which ran its first program on 21 June 1948.

The development of transistors in the late 1940s at Bell Laboratories allowed a new generation of computers to be designed with greatly reduced power consumption. The first commercially available stored-program computer, the Ferranti Mark I, contained 4050 valves and had a power consumption of 25 kilowatts. By comparison, the first transistorised computer, developed at the University of Manchester and operational by November 1953, consumed only 150 watts in its final version.

Data storage

Early electronic computers such as Colossus made use of punched tape, a long strip of paper on which data was represented by a series of holes, a technology now obsolete. Electronic data storage, which is used in modern computers, dates from World War II, when a form of delay line memory was developed to remove the clutter from radar signals, the first practical application of which was the mercury delay line. The first random-access digital storage device was the Williams tube, based on a standard cathode ray tube, but the information stored in it and in delay line memory was volatile in that it had to be continuously refreshed, and thus was lost once power was removed. The earliest form of non-volatile computer storage was the magnetic drum, invented in 1932 and used in the Ferranti Mark 1, the world’s first commercially available general-purpose electronic computer.

IBM introduced the first hard disk drive in 1956, as a component of their 305 RAMAC computer system. Most digital data today is still stored magnetically on hard disks, or optically on media such as CD-ROMs. Until 2002 most information was stored on analog devices, but that year digital storage capacity exceeded analog for the first time. As of 2007 almost 94% of the data stored worldwide was held digitally: 52% on hard disks, 28% on optical devices and 11% on digital magnetic tape. It has been estimated that the worldwide capacity to store information on electronic devices grew from less than 3 exabytes in 1986 to 295 exabytes in 2007, doubling roughly every 3 years.
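The “doubling roughly every 3 years” figure can be checked from the two endpoints given above; a back-of-the-envelope sketch:

```python
import math

# Worldwide storage capacity grew from under 3 EB (1986) to 295 EB (2007).
start, end = 3, 295        # exabytes
years = 2007 - 1986        # 21 years

doublings = math.log2(end / start)  # how many times capacity doubled
print(round(doublings, 1))          # ~6.6 doublings
print(round(years / doublings, 1))  # ~3.2 years per doubling
```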

Databases

Database management systems emerged in the 1960s to address the problem of storing and retrieving large amounts of data accurately and quickly. One of the earliest such systems was IBM’s Information Management System (IMS), which is still widely deployed more than 40 years later. IMS stores data hierarchically, but in the 1970s Ted Codd proposed an alternative relational storage model based on set theory and predicate logic and the familiar concepts of tables, rows and columns. The first commercially available relational database management system (RDBMS) was available from Oracle in 1980.

All database management systems consist of a number of components that together allow the data they store to be accessed simultaneously by many users while maintaining its integrity. A characteristic of all databases is that the structure of the data they contain is defined and stored separately from the data itself, in a database schema.

The extensible markup language (XML) has become a popular format for data representation in recent years. Although XML data can be stored in normal file systems, it is commonly held in relational databases to take advantage of their “robust implementation verified by years of both theoretical and practical effort”. As an evolution of the Standard Generalized Markup Language (SGML), XML’s text-based structure offers the advantage of being both machine and human-readable.
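XML’s dual machine- and human-readability can be seen in a short sketch using Python’s standard xml.etree module (the document here is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A small XML document: readable by humans as marked-up text...
doc = """
<catalog>
  <book id="1"><title>Dracula</title></book>
  <book id="2"><title>Emma</title></book>
</catalog>
"""

# ...and parseable by machines into a tree structure.
root = ET.fromstring(doc)
titles = [book.find("title").text for book in root.findall("book")]
print(titles)  # ['Dracula', 'Emma']
```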

Data retrieval

The relational database model introduced a programming-language-independent Structured Query Language (SQL), based on relational algebra.
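A minimal illustration of the relational model and SQL, using Python’s built-in sqlite3 module (the table and its rows are invented for the example):

```python
import sqlite3

# The schema (structure) is declared separately from the data it will hold.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)"
)

# Rows are inserted as data conforming to that schema.
conn.executemany(
    "INSERT INTO employee (name, dept) VALUES (?, ?)",
    [("Ada", "Engineering"), ("Grace", "Engineering"), ("Edgar", "Research")],
)

# SQL queries are declarative: they describe the result, not how to compute it.
rows = conn.execute(
    "SELECT dept, COUNT(*) FROM employee GROUP BY dept ORDER BY dept"
).fetchall()
print(rows)  # [('Engineering', 2), ('Research', 1)]
```

Note how the CREATE TABLE statement holds the structure while the INSERTs hold the data, mirroring the schema/data separation described above.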

The terms “data” and “information” are not synonymous. Anything stored is data, but it only becomes information when it is organized and presented meaningfully. Most of the world’s digital data is unstructured, and stored in a variety of different physical formats even within a single organization. Data warehouses began to be developed in the 1980s to integrate these disparate stores. They typically contain data extracted from various sources, including external sources such as the Internet, organized in such a way as to facilitate decision support systems (DSS).

Data transmission

Data transmission has three aspects: transmission, propagation, and reception. It can be broadly categorized as broadcasting, in which information is transmitted unidirectionally downstream, or telecommunications, with bidirectional upstream and downstream channels.

XML has been increasingly employed as a means of data interchange since the early 2000s, particularly for machine-oriented interactions such as those involved in web-oriented protocols such as SOAP, describing “data-in-transit rather than … data-at-rest”. One of the challenges of such usage is converting data from relational databases into XML Document Object Model (DOM) structures.

Data manipulation

Hilbert and Lopez identify the exponential pace of technological change (a kind of Moore’s law): machines’ application-specific capacity to compute information per capita roughly doubled every 14 months between 1986 and 2007; the per capita capacity of the world’s general-purpose computers doubled every 18 months during the same two decades; the global telecommunication capacity per capita doubled every 34 months; the world’s storage capacity per capita required roughly 40 months to double (every 3 years); and per capita broadcast information has doubled every 12.3 years.
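Each of those doubling periods corresponds to a compound annual growth rate; a small sketch of the conversion (the labels abbreviate the categories in the paragraph above):

```python
# Convert a doubling period (in months) into an equivalent annual growth rate:
# capacity multiplies by 2^(12/months) each year.
def annual_growth_rate(doubling_months):
    return 2 ** (12 / doubling_months) - 1

for label, months in [
    ("application-specific compute", 14),
    ("general-purpose compute", 18),
    ("telecommunication", 34),
    ("storage", 40),
]:
    print(f"{label}: {annual_growth_rate(months):.0%} per year")
```

A 14-month doubling period works out to roughly 81% growth per year, while a 40-month period is roughly 23% per year.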

Massive amounts of data are stored worldwide every day, but unless it can be analysed and presented effectively it essentially resides in what have been called data tombs: “data archives that are seldom visited”. To address that issue, the field of data mining – “the process of discovering interesting patterns and knowledge from large amounts of data” – emerged in the late 1980s.

Perspective

Academic perspective

In an academic context, the Association for Computing Machinery defines IT as “undergraduate degree programs that prepare students to meet the computer technology needs of business, government, healthcare, schools, and other kinds of organizations …. IT specialists assume responsibility for selecting hardware and software products appropriate for an organization, integrating those products with organizational needs and infrastructure, and installing, customizing, and maintaining those applications for the organization’s computer users.”

Commercial and employment perspective

In a business context, the Information Technology Association of America has defined information technology as “the study, design, development, application, implementation, support or management of computer-based information systems”. The responsibilities of those working in the field include network administration, software development and installation, and the planning and management of an organization’s technology life cycle, by which hardware and software are maintained, upgraded and replaced.

The business value of information technology lies in the automation of business processes, provision of information for decision making, connecting businesses with their customers, and the provision of productivity tools to increase efficiency.