What’s in your iPad?


Computers perform the same basic functions: inputting, outputting, processing, and storing data. Most computers also share the same basic components: input, output, memory, data path, and control. In other words, a computer needs input devices, output devices, storage, and a processor to function.

Liquid Crystal Display (LCD) – A display technology using a thin layer of liquid polymers that can be used to transmit or block light according to whether a charge is applied

Active Matrix Display – An LCD that uses a transistor to control the transmission of light at each individual pixel

Pixel – The smallest individual picture element. Screens are composed of hundreds of thousands to millions of pixels organized in a matrix.

While there are a variety of ways to implement a touch screen, many tablets today use capacitive sensing. Since people are electrical conductors, if an insulator like glass is covered with a transparent conductor, touching distorts the electrostatic field of the screen, which results in a change in capacitance or storage of electrical energy. This technology can allow multiple touches simultaneously.

Input/Output Devices:

  • LCD display
  • Camera
  • Microphone
  • Headphone jack
  • Speakers
  • Accelerometer
  • Gyroscope
  • Wi-Fi network
  • Bluetooth network

Input and output devices dominate the space inside a device, while the data path, control, and memory make up only a tiny portion of it.

Integrated Circuits (chips) – A device with dozens to millions of transistors

Central Processing Unit (CPU or processor) – The active part of the computer, which contains the data path and control and which adds numbers, tests numbers, signals I/O devices to activate, and so on. The data path performs arithmetic operations, while control tells the data path, memory, and I/O devices what to do according to the instructions of the program.

Volatile Memory (Main or primary) – Storage for programs and their data while they are running; its contents are lost when power is removed

  • Dynamic Random Access Memory (DRAM) – A volatile memory chip that provides random access to any location, with an access time of about 50 nanoseconds
  • Static Random Access Memory (SRAM) – A volatile memory chip that is faster but less dense than DRAM
  • Cache – A small, fast, volatile memory that acts as a buffer for a slower, larger memory

Nonvolatile Memory (Secondary) – Holds data and programs between runs, even when power is removed

  • Magnetic Disks – Composed of rotating platters coated with a magnetic recording material. Access times are 5–20 milliseconds
  • Flash Memory – Slower and cheaper per bit than DRAM, yet faster, more expensive per bit, and more power efficient than disks. Access times are 5–50 microseconds
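These access times span several orders of magnitude. A quick sketch of the gaps, using the figures from the notes above (the cache figure of ~1 ns is an assumed typical value, not one given in the notes):

```python
# Rough access times in seconds; illustrative midpoints of the ranges above.
access_time = {
    "SRAM cache": 1e-9,     # ~1 ns (assumed typical value)
    "DRAM":       50e-9,    # 50 nanoseconds
    "Flash":      25e-6,    # midpoint of 5-50 microseconds
    "Disk":       12.5e-3,  # midpoint of 5-20 milliseconds
}

# How many DRAM accesses fit in one access of each technology?
for name, t in access_time.items():
    ratio = t / access_time["DRAM"]
    print(f"{name:10s} {t:.2e} s  ({ratio:,.0f}x DRAM)")
```

A single disk access costs on the order of a quarter of a million DRAM accesses, which is why the layers below are arranged into a hierarchy at all.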

Multiple DRAM chips work together to contain the instructions and data of a program.

Abstraction – The interface between hardware and the lowest-level software is the instruction set architecture (ISA); together with the operating system interface, it forms the application binary interface (ABI).

Networks Advantages:

  • Communication – Exchange of information between computers at high speeds
  • Resource Sharing – Computers on the same network share I/O devices
  • Nonlocal Access – Remote access to your computer

With the dramatic rise in networking deployment and capacity, network technology became an integral part of the information revolution.

Software vs Hardware

Abstraction – Interpret or translate high-level operations into simple computer instructions


Hardware and software as hierarchical views

Types of System Software:

  1. Operating System – A supervising program that manages the resources of a computer for the benefit of the programs that run on that computer
  2. Compiler – A program that translates high-level language statements into assembly language statements
  3. Assembler – A program that translates a symbolic version of instructions into the binary version

In order to communicate with hardware, you need to send it electrical signals. The signals are either on or off, that is, 1 or 0. Hardware thus has a two-letter alphabet, with each letter a binary digit, or bit. Using bits for both instructions and data is a foundation of computing!

Even though hardware speaks in binary, humans do not, which creates a barrier between programmers and their hardware. As a result, the assembler was introduced to translate symbolic instructions into their binary machine form.
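At its core, an assembler is a table lookup plus some bit packing. Here is a toy sketch; the two-bit opcodes and the 8-bit instruction format are invented for illustration, not any real instruction set:

```python
# A toy assembler: translate symbolic instructions into binary strings.
# Opcodes and the 8-bit format are invented, not from a real ISA.
OPCODES = {"add": 0b00, "sub": 0b01, "load": 0b10, "store": 0b11}

def assemble(instr: str) -> str:
    """Encode 'op rN, value' as opcode(2 bits) | reg(2 bits) | value(4 bits)."""
    op, rest = instr.split(maxsplit=1)
    reg, value = (int(x) for x in rest.replace("r", "").split(","))
    word = (OPCODES[op] << 6) | (reg << 4) | value
    return format(word, "08b")

print(assemble("add r1, 3"))   # -> 00010011
```

A real assembler also resolves labels and symbols, but the essential job is the same: a mechanical translation from a human-readable symbolic form to bits.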


High-level to machine language

Why High-level Languages over Machine Language?

  1. Closer to natural language
  2. Improved programmer productivity
  3. Allow programs to be independent of the computer they were developed on

The Greatest Ideas in Computer Architecture

  • Moore’s Law – Integrated circuit resources double every 18–24 months
    • A prediction made in 1965 by Gordon Moore, co-founder of Intel
    • Design for where technology will be, not where it is today
    • Represented by the graph below


      ‘up and to the right’ graph
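The doubling rule compounds quickly. A small sketch of the growth (the starting transistor count and time horizon are illustrative numbers, not data from this post):

```python
# Moore's Law as compound doubling: resources double every
# `doubling_months` months. Inputs here are illustrative.
def transistors(start: int, years: float, doubling_months: float) -> int:
    return round(start * 2 ** (years * 12 / doubling_months))

# After 6 years at a 24-month doubling period: 3 doublings -> 8x.
print(transistors(1_000_000, 6, 24))   # -> 8000000
```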

  • Abstraction – Represent the design at different levels of representation
    • Increases productivity and decreases design time
    • Lower-level details are hidden behind a simpler, higher-level view

abstract painting

  • Common Case Efficiency – Enhance the efficiency of the frequent case more than that of rare cases
    • Experimentation and measurement are required to identify the common case
    • Fast sports car versus fast minivan?

Jaguar F-Type
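The standard way to quantify the common-case principle is Amdahl's Law, which the notes don't name explicitly: the overall speedup from improving a fraction f of execution time by a factor s is 1 / ((1 − f) + f/s).

```python
# Amdahl's Law: overall speedup from improving a fraction f of
# execution time by a factor s.
def overall_speedup(f: float, s: float) -> float:
    return 1 / ((1 - f) + f / s)

# A modest 2x improvement to a case that is 90% of run time beats
# a 10x improvement to a case that is only 10% of run time:
print(round(overall_speedup(0.9, 2), 3))    # -> 1.818
print(round(overall_speedup(0.1, 10), 3))   # -> 1.099
```

This is why measurement comes first: without knowing which case is common, there is no way to know which optimization is worth the effort.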

  • Parallelism Efficiency – Performing operations in parallel
    • Increases performance
    • Represented by the jet engines on a plane below

Dual engines on jet
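A minimal sketch of the idea: split a job into pieces and let several workers share the load, like multiple engines on one plane. (Python threads illustrate the pattern; in CPython the GIL limits true CPU parallelism, so real numeric workloads would use processes or native code.)

```python
from concurrent.futures import ThreadPoolExecutor

# Partition the data into 4 interleaved slices and sum them in parallel.
data = list(range(1_000))
chunks = [data[i::4] for i in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))

print(sum(partial_sums))   # -> 499500, same as sum(data)
```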

  • Pipelining Efficiency – A particular pattern of parallelism
    • Work flows through a sequence of distinct stages that operate concurrently
    • Represented by ventilation in data centers

Air ventilation of data centers
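The payoff of pipelining is easy to put in numbers. Assuming k stages of equal length (an idealization; real stages are rarely perfectly balanced), n tasks take k + n − 1 stage-times instead of k × n:

```python
# Pipelining: overlapping stages of consecutive tasks.
# Assumes k equal-length stages, an idealization.
def unpipelined(n: int, k: int) -> int:
    return n * k            # each task runs all k stages alone

def pipelined(n: int, k: int) -> int:
    return k + n - 1        # fill the pipe once, then one result per step

n, k = 100, 5
print(unpipelined(n, k), pipelined(n, k))   # -> 500 104
```

As n grows, the speedup approaches the number of stages k.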

  • Prediction Efficiency – Easier to ask for forgiveness than permission: guess and start working rather than wait until you know for sure
    • Worthwhile as long as recovering from a misprediction is not expensive and the prediction is reasonably accurate
    • Represented by the sky for weather forecasting

Weather forecasting based on clouds
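A classic hardware use of this idea is branch prediction. The notes don't go into detail, so here is a sketch of the simplest scheme, a 1-bit predictor that guesses a branch will do whatever it did last time (the initial guess of "taken" is an arbitrary assumption):

```python
# A 1-bit branch predictor: guess the last observed outcome.
def predict(history: list[bool]) -> tuple[int, int]:
    """Return (correct, total) predictions over a branch outcome trace."""
    last = True          # initial guess: taken (arbitrary assumption)
    correct = 0
    for outcome in history:
        if last == outcome:
            correct += 1
        last = outcome   # remember the most recent outcome
    return correct, len(history)

# A loop branch taken 9 times, then falling through, mispredicts once:
print(predict([True] * 9 + [False]))   # -> (9, 10)
```

An alternating branch is the worst case for this scheme, which is why real processors use richer predictors with more history.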

  • Memory Structure – Memory needs to be fast, large, and cheap
    • Memory speed hinders performance, while capacity limits the size of problems that can be solved
    • Memory is often one of the most expensive components in a computer
    • Cache versus Random Access Memory (RAM) versus Hard Disk Drive (HDD)
    • Represented by a pyramid with cache at the top and HDD at the bottom

Pyramid memory structure
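The pyramid's effectiveness is usually summarized by average memory access time (AMAT), a standard formula: hit time plus miss rate times miss penalty. The numbers below are assumed for illustration.

```python
# Average memory access time for a two-level hierarchy:
# AMAT = hit_time + miss_rate * miss_penalty. Numbers are assumed.
def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    return hit_time_ns + miss_rate * miss_penalty_ns

# A 1 ns cache with a 5% miss rate backed by 50 ns DRAM:
print(round(amat(1.0, 0.05, 50.0), 2))   # -> 3.5
```

A small, fast cache that hits 95% of the time makes the whole hierarchy behave almost as fast as its top level. That is the entire bet behind the pyramid.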

  • Dependability via Redundancy – Include redundant components that can detect failures and take over when a component fails
    • The moral of the story is that any physical device can fail
    • Represented by emergency procedures when flying a plane

Emergency procedure for crashed plane
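The cheapest form of redundancy is a single extra bit. A parity bit, a standard error-detection technique, catches any single-bit flip:

```python
# Redundancy for dependability: an even-parity bit detects any
# single-bit error in a stored or transmitted word.
def with_parity(bits: list[int]) -> list[int]:
    return bits + [sum(bits) % 2]   # append bit so total count of 1s is even

def check(word: list[int]) -> bool:
    return sum(word) % 2 == 0       # even parity must still hold

word = with_parity([1, 0, 1, 1])
assert check(word)                  # intact word passes

word[2] ^= 1                        # flip one bit "in transit"
print(check(word))                  # -> False: the error is detected
```

Parity only detects errors; correcting them takes more redundant bits (e.g. ECC memory), but the principle is the same: spend extra resources so that failure is visible and survivable.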

Evolution of Computers

This is the beginning of a series of posts meant to shed light on the ever-changing information technology industry.

Real Gross Output of Computer systems design and related services (in billions) (1)

Year     2008    2009    2010    2011    2012    2013    2014    2015
Output   269.8   266.2   291.8   312.0   331.3   335.3   348.1   354.3

Moore’s law refers to an observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention. (2)



Computer Applications:

  • Personal Computer (PC) – The most widely known application. Delivers good performance to a single user at low cost and executes third-party software.
  • Servers (S) – Greater computing, storage, and input/output capacity. Servers generally place a greater emphasis on dependability, since a single crash can be costly.
  • Supercomputers (SC) – tens of thousands of processors and many terabytes of memory. Mostly used for scientific developments like weather forecasting and oil exploration.
  • Embedded Computers (EC) – Run one application or a set of related applications integrated with the hardware. Embedded computers are by far the most numerous computers in use today.
  • Personal Mobile Devices (PMD) – Replacing the PC, with the drawback of lacking traditional peripherals such as a keyboard and mouse
  • Cloud Computing (CC) – Replacing the server with datacenters known as Warehouse Scale Computers

The central issues of the PostPC era (PMD & CC) are the parallel nature of processors and the hierarchical nature of memories. Despite these issues, many professionals still believe that Moore’s Law holds substance in the evolution of the computer.


By reading this series you will gain an understanding of:

  1. Programming in high-level languages such as C and Java
  2. Interfacing hardware and software
  3. The performance of a program and how to improve performance
  4. Techniques used to improve performance and energy efficiency for hardware designers
  5. Pros and Cons of sequential and parallel processing
  6. The great ideas in the computer world


(1) – U.S. Bureau of Economic Analysis

(2) – Investopedia