
5nm Chip


Preparing test wafers with 5nm silicon nanosheet transistors

A group of researchers from IBM, GlobalFoundries, and Samsung created a new transistor design, based on an inventive new process, that will lead to more speed and power efficiency at a lower cost. The push for a smaller size comes from the need to power self-driving cars, on-board AI, and 5G sensors. The pressure to keep up with Moore’s Law of 1965 also demanded a move to a new structure that allows for more transistors on one chip. During fabrication, these chips replace the FinFET structure with stacked layers of horizontal silicon nanosheets, giving the gate contact on a fourth side of the channel. Sadly, these chips will not reach the market until after their predecessors, the 7nm process chips, arrive in 2018.

5nm nanosheet transistors

Silicon nanosheet transistors at 5nm

“As we make progress toward commercializing 7nm in 2018 at our Fab 8 manufacturing facility, we are actively pursuing next-generation technologies at 5nm and beyond to maintain technology leadership and enable our customers to produce a smaller, faster, and more cost efficient generation of semiconductors.” – Gary Patton, CTO and Head of Worldwide R&D at GlobalFoundries


The last major breakthrough came in 2009 with the creation of FinFET, which was first manufactured in 2012 on the 22nm process (a technology that has since scaled to the 7-10nm processes).

  • First use of a 3D structure to control electric current, rather than the 2D ‘planar’ system of years past.
  • Maximizes the amount of current flow in the on state and minimizes the amount of leakage in the off state, which makes it more efficient.

“Fundamentally, FinFET structure is a single rectangle, with the three sides of a structure covered in gates” – Mukesh Khare, VP of Semiconductor Research for IBM


Wafer of chips with 5nm silicon nanosheet transistors

Images courtesy of IBM

Optical Computing


The Ising model is a mathematical model that describes how magnetic materials have atomic spins in an upward or downward state. This model can be applied to real-world business challenges such as finding the optimal delivery truck route and discovering new prescription drugs. Using the Ising model in optical computing is a promising next step for computer architecture, since researchers can compute certain mathematical problems much faster than conventional computers can. With progress lagging behind Moore’s Law, this new technology could help drive advancements in fields such as pharmaceuticals and telecommunications. I fear that this will lead to more electronic waste when this type of computing reaches the market, due to a shift away from conventional computers to Ising-model computers. However, my fear can be eased if researchers think about the global impact of designing a new chip for computer computations.
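To make the idea concrete, here is a minimal sketch of the Ising model in Python: an energy function for a 1-D chain of spins and a simulated-annealing search for a low-energy configuration. This is an illustration of the mathematics, not of how an optical Ising machine physically works, and all parameter values are made up:

```python
import math
import random

def ising_energy(spins, J=1.0):
    """Energy of a 1-D Ising chain: E = -J * sum of s[i] * s[i+1]."""
    return -J * sum(a * b for a, b in zip(spins, spins[1:]))

def anneal(spins, steps=10000, temp=2.0, cooling=0.999):
    """Simulated annealing: try random spin flips, keep those that lower
    the energy, and occasionally accept worse ones while still 'hot'."""
    energy = ising_energy(spins)
    for _ in range(steps):
        i = random.randrange(len(spins))
        spins[i] *= -1                       # trial flip
        new_energy = ising_energy(spins)
        if new_energy <= energy or random.random() < math.exp((energy - new_energy) / temp):
            energy = new_energy              # accept
        else:
            spins[i] *= -1                   # reject: undo the flip
        temp *= cooling
    return spins, energy

random.seed(42)
start = [random.choice([-1, 1]) for _ in range(20)]
best, best_energy = anneal(start)
print(best_energy)  # the fully aligned ground state would have energy -19
```

The same energy-minimization framing is what lets delivery routes or drug-candidate searches be encoded as spin problems.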

Strength: The articles provided a simplified definition of the Ising model and gave ample background information on the advancements of these models.

Weakness: The articles did not address the global impact of designing a totally new chip, given today’s massive amounts of electronic waste.




What’s in your iPad?


Computers perform the same basic functions: inputting, outputting, processing, and storing data. Most computers also have the same basic components: input, output, memory, data path, and control. In other words, a computer needs input devices, output devices, storage, and a processor to function.

Liquid Crystal Display (LCD) – A display technology using a thin layer of liquid polymers that can be used to transmit or block light according to whether a charge is applied

Active Matrix Display – An LCD that uses a transistor to control the transmission of light at each individual pixel

Pixel – The smallest individual picture element. Screens are composed of hundreds of thousands to millions of pixels organized in a matrix.

While there are a variety of ways to implement a touch screen, many tablets today use capacitive sensing. Since people are electrical conductors, if an insulator like glass is covered with a transparent conductor, touching distorts the screen’s electrostatic field, which results in a change in capacitance, the storage of electrical energy. This technology allows multiple simultaneous touches.

Input/Output Devices:

  • LCD display
  • Camera
  • Microphone
  • Headphone jack
  • Speakers
  • Accelerometer
  • Gyroscope
  • Wi-Fi network
  • Bluetooth network

Input and output devices dominate the space inside a device, while the data path, control, and memory make up only a tiny portion of it.

Integrated Circuits (chips) – A device with dozens to millions of transistors

Central Processing Unit (CPU or processor) – The active part of the computer, which contains the data path and control and which adds numbers, tests numbers, signals I/O devices to activate, and so on. The data path performs arithmetic operations, while control tells the data path, memory, and I/O devices what to do according to the instructions of the program

Volatile Memory (Main or primary) – Storage for programs and their data while they are running

  • Dynamic Random Access Memory (DRAM) – A volatile memory chip that provides random access to any location, with access times of around 50 nanoseconds
  • Static Random Access Memory (SRAM) – A volatile memory chip that is faster but less dense than DRAM
  • Cache – A volatile, small, fast memory that acts as a buffer for a slower, larger memory

Nonvolatile Memory (Secondary) – Holds data and programs between runs, retaining them even when the power is off

  • Magnetic Disks – Composed of rotating platters coated with a magnetic recording material. Access times are 5-20 milliseconds
  • Flash Memory – Slower and cheaper than DRAM, yet more expensive per bit and more power efficient than disks. Access times are 5-50 microseconds

Multiple DRAM chips work together to contain the instructions and data of a program.

Abstraction – The interface between hardware and the lowest-level software, such as the instruction set architecture (ISA) and the application binary interface (ABI)

Networks Advantages:

  • Communication – Exchange of information between computers at high speeds
  • Resource Sharing – Computers on the same network share I/O devices
  • Nonlocal Access – Remote access to your computer

With the dramatic rise in networking deployment and capacity, network technology became an integral part of the information revolution.

Software vs Hardware

Abstraction – Interpret or translate high-level operations into simple computer instructions


Hardware and software as hierarchical views

Types of System Software:

  1. Operating System – Supervising program manages the resources of a computer for the benefit of the programs that run on that computer
  2. Compiler – A program that translates high-level language statements into assembly language statements
  3. Assembler – A program that translates a symbolic version of instructions into the binary version

In order to communicate with hardware, you need to send it electrical signals. The signals are categorized as on and off, or 1 and 0. In other words, hardware has a two-letter alphabet, with each letter being a binary digit, or bit. Using bits for both instructions and data is a foundation of computing!

Even though hardware speaks in binary, humans do not, which creates a barrier between programmers and their hardware. As a result, the assembler was introduced to translate symbolic instructions into their binary machine versions.
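The point that bits carry no inherent meaning can be shown with a tiny Python sketch: one arbitrary 32-bit pattern reads differently as an integer, as text, and as a floating-point number, depending entirely on how the hardware or software interprets it:

```python
import struct

bits = 0x41424344                             # one arbitrary 32-bit pattern

as_int = bits                                 # interpreted as an unsigned integer
as_bytes = bits.to_bytes(4, "big")            # interpreted as four raw bytes
as_text = as_bytes.decode("ascii")            # interpreted as ASCII characters
as_float = struct.unpack(">f", as_bytes)[0]   # interpreted as an IEEE-754 float

print(as_int)              # 1094861636
print(as_text)             # ABCD
print(round(as_float, 2))  # roughly 12.14
```

The same principle lets one memory hold both a program's instructions and its data.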


High-level to machine language

Advantages of High-level Languages over Machine Language:

  1. More natural language
  2. Improved programmer productivity
  3. Allows programs to be independent of the computer
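Python’s built-in dis module gives a convenient glimpse of this translation: it shows the simple stack-machine instructions that one high-level line compiles into, analogous to (though not the same as) a compiler emitting assembly. The function below is just an illustrative example:

```python
import dis

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

# One high-level line expands into a handful of simple instructions:
dis.dis(celsius_to_fahrenheit)
```

Each printed line is one low-level operation (load a value, multiply, divide, add, return), exactly the kind of simple step that hardware can execute directly.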

The Greatest Ideas in Computer Architecture

  • Moore’s Law – Integrated circuit resources double every 18-24 months
    • Predicted in 1965 by Gordon Moore, co-founder of Intel
    • Design with the future of technology in mind, not just the present
    • Represented by the graph below


      ‘up and to the right’ graph

  • Abstraction – Represent the design at different levels of representation
    • Increases productivity and decreases design time
    • Lower-level details are hidden to present a simpler, higher-level view

abstract painting

  • Common Case Efficiency – Enhance the efficiency of the common case more than that of rare cases
    • Experimentation and measurement are required
    • Fast sports cars versus fast minivans?

Jaguar F-Type
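Making the common case fast is usually quantified with Amdahl’s Law. As a rough illustration (the percentages and factors below are made-up numbers):

```python
def overall_speedup(fraction, factor):
    """Amdahl's Law: overall speedup when `fraction` of the execution
    time is accelerated by `factor`."""
    return 1 / ((1 - fraction) + fraction / factor)

# Doubling the speed of a common case (80% of the time)...
print(round(overall_speedup(0.8, 2), 2))   # 1.67
# ...beats a 10x speedup of a rare case (20% of the time):
print(round(overall_speedup(0.2, 10), 2))  # 1.22
```

This is why measurement matters: you have to know which case is actually common before deciding what to optimize.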

  • Parallelism Efficiency – Performing operations in parallel
    • Increases performance
    • Represented by the jet engines on a plane below

Dual engines on jet
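A minimal sketch of the parallelism idea in Python: split the work into chunks, process the chunks concurrently, and combine the partial results. (The chunk sizes and worker count are arbitrary, and in CPython threads illustrate the structure rather than deliver a true CPU-bound speedup.)

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Split data into chunks, sum each chunk concurrently, then
    combine the partial sums into the final answer."""
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, parts))

print(parallel_sum(list(range(1000))))  # 499500
```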

  • Pipelining Efficiency – Pattern of parallelism
    • Has a particular sequence with different stages
    • Represented by ventilation in data centers

Air ventilation of data centers
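A back-of-the-envelope calculation shows why pipelining helps: with several stages, tasks overlap instead of running start-to-finish one at a time. The task and stage counts below are made-up numbers:

```python
def completion_time(n_tasks, n_stages, stage_time):
    """Time to finish n_tasks sequentially vs. through a pipeline,
    where a new task enters as soon as the first stage frees up."""
    sequential = n_tasks * n_stages * stage_time
    pipelined = (n_stages + n_tasks - 1) * stage_time
    return sequential, pipelined

# 100 tasks, each needing 4 one-second stages:
seq, pipe = completion_time(100, 4, 1)
print(seq, pipe)  # 400 103
```

The pipeline only pays the full per-task latency once, to fill up; after that, one task finishes every stage-time.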

  • Prediction Efficiency – Easier to ask for forgiveness than permission
    • As long as prediction is not expensive and is accurate
    • Represented by the sky for weather forecasting

Weather forecasting based on clouds
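A classic instance of prediction in hardware is the branch predictor. Here is a simplified sketch of a 2-bit saturating-counter predictor, a common textbook scheme rather than any specific processor’s design:

```python
def predictor_accuracy(outcomes):
    """2-bit saturating counter: predict 'taken' while the counter is
    2 or 3, then nudge the counter toward each actual outcome."""
    state, correct = 2, 0  # start in the weakly 'taken' state
    for taken in outcomes:
        if (state >= 2) == taken:
            correct += 1
        # saturate at 0 and 3 so one surprise doesn't flip the prediction
        state = min(3, state + 1) if taken else max(0, state - 1)
    return correct / len(outcomes)

# A loop branch taken 9 times and then not taken once is predicted
# correctly 9 times out of 10:
print(predictor_accuracy([True] * 9 + [False]))  # 0.9
```

The guess is cheap and usually right, and a misprediction just means discarding some speculative work, i.e. asking forgiveness rather than permission.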

  • Memory Structure – Required to be fast, large, and cheap
    • Memory speed hinders performance, while capacity limits the problems that can be solved
    • Memory is one of the most expensive components in computers
    • Cache versus Random Access Memory (RAM) versus Hard  Disk Drive (HDD)
    • Represented by a pyramid with cache at the top and HDD at the bottom

Pyramid memory structure
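The payoff of the pyramid is usually summarized as average memory access time (AMAT): most accesses hit the small, fast cache at the top, and only misses pay the cost of the slower level below. The numbers here are illustrative assumptions, not measurements:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time: every access pays the hit time,
    and misses additionally pay the penalty of the slower level."""
    return hit_time + miss_rate * miss_penalty

# Assumed numbers: 1 ns cache hit, 5% miss rate, 50 ns DRAM penalty.
print(amat(1, 0.05, 50))  # 3.5 ns on average
```

Even with a 50x slower backing memory, a high hit rate keeps the average close to the fast cache's speed, which is the whole point of the hierarchy.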

  • Dependability via Redundancy – Include components for detecting and resolving failures
    • Moral of the story is that any device can fail
    • Represented by emergency procedures when flying a plane

Emergency procedure for crashed plane

Evolution of Computers

This post begins a series intended to shed light on the ever-changing information technology industry.

Real Gross Output of Computer Systems Design and Related Services (in billions) (1)

Year     2008   2009   2010   2011   2012   2013   2014   2015
Output   269.8  266.2  291.8  312.0  331.3  335.3  348.1  354.3

Moore’s law refers to an observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention. (2)
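The doubling can be sketched with a one-line formula. The 1971 starting point, the Intel 4004’s roughly 2,300 transistors, is used here purely as an illustration (the observed doubling period has drifted between one and two years over the decades):

```python
def projected_transistors(initial, years, doubling_period=2):
    """Projected transistor count if it doubles every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# Starting from ~2,300 transistors in 1971, doubling every two years
# projects into the billions by 2015:
print(round(projected_transistors(2300, 2015 - 1971)))  # about 9.6 billion
```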



Computer Applications:

  • Personal Computer (PC) – The most widely known application. Delivers good performance to a single user at low cost and executes third-party software.
  • Servers (S) – Greater computing, storage, and input/output capacity. In general, servers place a greater emphasis on dependability, since one crash can be costly.
  • Supercomputers (SC) – Tens of thousands of processors and many terabytes of memory. Mostly used for scientific work such as weather forecasting and oil exploration.
  • Embedded Computers (EC) – Run one application, or a set of related applications, integrated within the hardware. Embedded computers are everywhere now.
  • Personal Mobile Devices (PMD) – Replacing the PC, with the drawback of not having traditional peripherals, while being kept at low cost
  • Cloud Computing (CC) – Replacing the server with datacenters known as Warehouse Scale Computers

The issues of the PostPC era (PMD & CC) are the parallel nature of processors and the hierarchical nature of memories. Despite these issues, many professionals still believe that Moore’s Law holds substance in the evolution of the computer, based on the graph below.


By reading this series you will gain an understanding of:

  1. Programming in high-level languages such as C and Java
  2. Interfacing hardware and software
  3. The performance of a program and how to improve performance
  4. Techniques used to improve performance and energy efficiency for hardware designers
  5. Pros and Cons of sequential and parallel processing
  6. The great ideas in the computer world


(1) – U.S. Bureau of Economic Analysis

(2) – Investopedia