Updated 2014-06-06 03:02:30 by AMG

Computer Architecture is the design of computers and their components.

See Also

mathematical foundation of computers and programs
Three major computing quantizers 1: Sockets
Integer (Computer Science)

Description

TP 2008-10-20: I recently found The Elements of Computing Systems, AKA From NAND to Tetris - Building a Modern Computer from First Principles, for those software guys & gals (like me) who want to learn more about digital logic, gates, etc. Designed as a mid-level college course, it starts with NAND and progresses from digital logic to building gates and higher-level chips, finally culminating in a CPU. From there, the book goes into software, from basic assembly language to building a VM, compiler, and OS, based on the CPU built earlier. Resources include the book, a Google Tech Talk video, software (HDL simulator, CPU emulator, VM, etc.), and study plans. I just purchased the book, and hope to work through the problems.

Description by Theo Verelst

I'm sure it is useful, I'm not sure it is popular, and I'm also sure a lot of things would be less miserable with some more knowledge about the subject.

Of course, feel free to add (preferably identified).

My official and finished education in the area is a Master's degree in Electrical Engineering, and before I left my position, for reasons of a completely different nature than the content of the work, I was maybe between a few months and a year away from a PhD in the area of advanced computer design, at the same network theory section where I graduated. Not that I think mightily highly of all fellow workers from that time, but at least I'm qualified. Not that I normally care that much, but when it concerns knowledge about computers, and especially when certain circuits and the people involved in them are concerned, I take it to be essential that I make that clear.

Computers existed in some form already long ago; I think it may have been in ancient China that mechanical adding machines were invented, and certainly telephone exchanges from a small century ago were computing some complicated things. In the Second World War, analog computers were used to compute bomb trajectories, accounting for the nonlinear (quadratic, I guess) influence of air resistance and wind on the bombs.

A bit later, tube-based computers were tried, applying binary principles. I think Boolean algebra was a lot older than that, but I should look that up. Such a machine would produce amazing amounts of heat and use a lot of power, and of course every hour or day one of the tubes would blow and would have to be fixed.

Things started to accelerate after the transistor became available cheaply, and especially when the first and further digital chips appeared and even became cheaply available. I know this from experience since about 1977 or so, when as a still-beginning teenager I bought those for hobbying together (working) circuits. That was a little while before the advancing technology brought forth the Z80, and the PET, TRS-80, Apple, and other computers started to become widely sold, in the time when the Intel 8080 processor was known (I had a book on it, but hardly dreamt of owning one...).

In about 1979 (from memory) serious versions of the TRS-80 and other microcomputer systems became consumer goods as well, that is, they were widely used outside business and for much lower prices.

Before that, many machines of great innovative value were made in the medium and major business and science sense, such as supercomputers, all kinds of (IBM) mainframes, and the interesting PDP-9 and PDP-11 and others.

It was on such systems, and also some smaller (for instance CP/M-based) ones, that most of the principles were experimented with which 20, 30 years later are still fashionable, such as disc-based operating systems, multitasking, multi-user operation, memory management such as paging and virtual memory, shared libraries, and also caching, pipelining, and parallel systems (for instance in early and later supercomputers).

Remember that our modern and desirable (I mean that, too) Linux comes in many senses directly from easily 20-year-old and older quality source files, including the X Window based window handling, which is from before Apple and MS Windows, and Atari and such.

A long introduction to make a main point: that computers and their software have undergone a quite logical historical growth, and should not be taken as the product of some hidden and obscure process of some people having power over bits.

Processor

The heart of a computer, which consists of all kinds of registers (small internal memory locations), counters (such as the program counter and stack pointer), an Arithmetic and Logic Unit (ALU), which can perform all kinds of computations such as adding and subtracting, and all kinds of buses and connection adapters to the outside world, primarily to the main memory and some peripherals.

Memory

The main memory of a single (non-parallel) computer system of the ordinary and normally-to-be-assumed von Neumann type of architecture is a big list of storage locations, organised as a row of bytes, double words, or even 4 or 8 bytes in parallel. Mostly, the actual memory chips are simply linearly addressed, possibly in blocks of memory. Memory can be read from (non-destructively) and written to by applying digital 'read' or 'write' signals, in addition to a physical (binary) address to access a certain memory location, and a data word, which is a number of bits, usually a small integer times 8 bits, which can be written into the memory, or read from it.
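
To make the read/write idea concrete, here is a minimal sketch in C of a byte-addressable memory modelled as a flat array, with explicit read and write operations; the size and the function names are invented for the example:

   #include <stdint.h>
   #include <stdio.h>

   #define MEM_SIZE 65536           /* a hypothetical 64 KB address space */

   static uint8_t memory[MEM_SIZE]; /* the flat row of storage locations */

   /* non-destructive read: fetch one byte at the given address */
   uint8_t mem_read(uint16_t address) {
       return memory[address];
   }

   /* write: store one byte at the given address */
   void mem_write(uint16_t address, uint8_t data) {
       memory[address] = data;
   }

   int main(void) {
       mem_write(0x1000, 0x2A);                     /* store 42 at address 0x1000 */
       printf("read back: %u\n", mem_read(0x1000)); /* prints 42 */
       return 0;
   }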

No matter what type of object-oriented, net-enabled, new-hype bladibla language one uses, every ordinary PC- and workstation-like computer has these types of concepts at the very heart of its operation.

Paging means arranging memory in pages of, for instance, a kilobyte each, which can be addressed by just the higher-order bits of their address and treated as a unit. In many computer systems, pages of memory can be written to disc, and the address space 'seen' by a program or programmer can be mapped to actual pages in the physical memory or on temporary disc storage, so that the total virtual memory is larger than the actual physical memory.

The page table points to where an actual memory page is, based on the upper part of its address. Cache mechanisms can also make use of the page principle to cache, or write through, a page of virtual memory in the cache to the main memory or disc storage. A cache is an intermediate memory between the processor and the large, linear main memory chips, which acts as a buffer so the processor does not have to wait for data to arrive from, or be stored in, the main memory. It usually sits close to the processor with a fast electronic access path, is smaller, has a faster response time, and is in a certain sense associative, that is, it covers more virtual than actual address space. A cache can determine whether a word of data or a page which the processor wants to access is stored in its memory cells or not, and it can also 'write through' or 'read in' data to or from the main memory when it is asked by the processor to access a certain word.
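
As an illustration of the page-table idea, here is a simplified sketch in C of virtual-to-physical address translation with 4 KB pages; it is not any particular machine's MMU, and the table contents and names are invented for the example:

   #include <stdint.h>
   #include <stdio.h>

   #define PAGE_SIZE   4096u   /* 4 KB pages: the low 12 bits are the offset */
   #define PAGE_SHIFT  12
   #define NUM_PAGES   16      /* tiny virtual address space for the example */

   /* page table: indexed by virtual page number, holds the physical frame number */
   static uint32_t page_table[NUM_PAGES] = { 7, 3, 0, 12 /* rest zero */ };

   uint32_t translate(uint32_t virtual_addr) {
       uint32_t vpn    = virtual_addr >> PAGE_SHIFT;     /* upper bits select the page */
       uint32_t offset = virtual_addr & (PAGE_SIZE - 1); /* lower bits stay the same   */
       uint32_t frame  = page_table[vpn];                /* look up the physical frame */
       return (frame << PAGE_SHIFT) | offset;
   }

   int main(void) {
       uint32_t va = (1u << PAGE_SHIFT) + 0x123;   /* offset 0x123 in virtual page 1 */
       printf("virtual 0x%x -> physical 0x%x\n", (unsigned)va, (unsigned)translate(va));
       return 0;
   }

In a real system the table entry would also carry a 'present' bit, so that an access to a page currently held on disc can trigger the operating system to read it back into physical memory first.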

Instruction fetch and execution

A processor runs basically by having a binary counter which remembers where it is running a program in memory: accessing the memory location that counter points to, fetching the processor machine instruction from that memory location, incrementing the program counter, interpreting the instruction it read, possibly reading subsequent memory locations for data belonging to that instruction, and performing the action which is defined by the instruction code, such as adding a processor-internal register's content to a certain memory location's content and storing the result in some other register. Then this game starts again from the next memory location.

A program is a string of instructions, or a list of them.

There are also instructions which change the course of program execution, in other words which change the next instruction the PC points to, and which can also do so conditionally, for instance only when the result of the last addition was negative or zero.
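
A minimal sketch of this fetch-increment-decode-execute cycle, written as a toy machine in C; the instruction set, its encoding, and the little countdown program are invented purely for illustration and do not correspond to any real processor:

   #include <stdint.h>
   #include <stdio.h>

   /* invented opcodes for a toy machine */
   enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_JNZ = 3 };

   int main(void) {
       /* a tiny program, two bytes per instruction: opcode, operand */
       uint8_t memory[8] = {
           OP_LOAD, 3,      /* 0: acc = 3                          */
           OP_ADD,  0xFF,   /* 2: acc += -1 (0xFF as a signed byte) */
           OP_JNZ,  2,      /* 4: if acc != 0, jump back to 2       */
           OP_HALT, 0       /* 6: stop                              */
       };
       uint8_t pc  = 0;     /* program counter                      */
       int8_t  acc = 0;     /* a single accumulator register        */

       for (;;) {
           uint8_t opcode  = memory[pc];      /* fetch the instruction          */
           uint8_t operand = memory[pc + 1];  /* fetch the data belonging to it */
           pc += 2;                           /* point at the next instruction  */
           if      (opcode == OP_LOAD) acc = (int8_t)operand;
           else if (opcode == OP_ADD)  acc += (int8_t)operand;
           else if (opcode == OP_JNZ)  { if (acc != 0) pc = operand; } /* conditional branch */
           else break;                        /* OP_HALT or unknown: stop       */
           printf("pc=%d acc=%d\n", pc, acc);
       }
       return 0;
   }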

Peripherals

External devices connected to the processor and memory system, such as disc drives, network hardware, display generators and monitors, printers, keyboard and mouse, etc.

The processor needs some kind of way in its digital connections to have access to these kinds of devices; for instance, they can be assigned a special (virtual) address, which isn't connected to a memory, but to some unit able to receive and return data over it. A communication protocol is needed to make all units communicate in a well-defined and error-free way with the processor.
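
One common arrangement is memory-mapped I/O, where a device's registers appear at fixed addresses. A hedged sketch in C follows; the address 0x4000F000, the register layout, and the ready bit are invented for illustration, and such code only makes sense on bare metal or in an emulator, not as a normal user-space program:

   #include <stdint.h>

   /* invented addresses: a status register and a data register of some device */
   #define DEV_STATUS  (*(volatile uint8_t *)0x4000F000u)
   #define DEV_DATA    (*(volatile uint8_t *)0x4000F004u)
   #define TX_READY    0x01   /* assumed 'ready to transmit' bit */

   /* send one byte to the device, following a simple handshake protocol */
   void dev_send(uint8_t byte) {
       while ((DEV_STATUS & TX_READY) == 0)
           ;                  /* wait until the device reports it is ready      */
       DEV_DATA = byte;       /* writing this 'address' goes to the device, not RAM */
   }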

Machine Code

The processor has a set of possible instructions, such as memory access (move/load), adding, logical operations, test instructions, branch (jump) and subroutine instructions.

To remember all such instructions easily, they are usually referenced by a shorthand notation called mnemonics, such as LD, ADD, JUMP, GOSUB/CALL, COMP.

A list of such instructions, with their arguments, one per row, is called assembly language, which can be translated into digital strings of bytes according to the appropriate translation of every instruction into its instruction code, which the processor understands.

An assembler does this rather direct translation, and a linker can be used to resolve indirect references, such as where a programmer instructs the assembler to load from a certain memory location which he only refers to by a label, and not the actual address. The linker can be instructed to fill in the actual address after that has been determined, for instance by taking the first available space after the whole program has been assembled. This is called resolving indirections.
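
A small sketch of the same idea in C terms: the first file refers to a symbol it only knows by name (a label), the second file actually reserves the storage, and the linker fills in the real address once it has placed everything. The file and symbol names are made up for the example:

   /* file: main.c -- refers to a symbol it only knows by name              */
   extern unsigned char buffer[256];   /* no address known here, just a label */

   int main(void) {
       buffer[0] = 42;    /* the compiler/assembler emits a placeholder address... */
       return buffer[0];  /* ...and the linker patches in the real one later       */
   }

   /* file: data.c -- actually defines the storage; the linker decides where */
   unsigned char buffer[256];

Compiling the two files separately (gcc -c main.c data.c) and then linking the resulting object files (gcc main.o data.o) is the step that resolves the reference.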

High Level Programming Languages

A HLPL usually has a clear structure, and methods to enforce such structure, such as procedures or functions, and programming constructs for the most well-known control structures such as conditionals (ifs), for loops, while loops, and assignment and variable referencing mechanisms. Also, languages suitable for low-level operations need some kind of pointer implementation, where the programmer can directly access memory locations.
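
A short C fragment showing the constructs just mentioned: a function, assignment, a for loop, a while loop, a conditional, and a pointer directly referring to a memory location; the names are chosen only for the example:

   #include <stdio.h>

   /* a procedure/function with one argument and a return value */
   int sum_up_to(int n) {
       int total = 0;                   /* assignment */
       for (int i = 1; i <= n; i++)     /* for loop   */
           total += i;
       return total;
   }

   int main(void) {
       int x = 10;
       int *p = &x;                     /* pointer to a memory location */
       if (*p > 5)                      /* conditional                  */
           printf("sum 1..%d = %d\n", x, sum_up_to(x));
       while (x > 0)                    /* while loop                   */
           x--;
       return 0;
   }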

Most modern languages adhere quite a bit to the functional programming paradigm at some level, which means the idea of a function is important. A function is a unit of programming, or associated object code, which can be called by name, and possibly has one or more arguments, which are data or pointers to data which the function is handed before it starts to execute. When it is done, the function returns processor control, or program flow, to the calling function, which could be the main function, that is, the highest function level of a program, and it can return a value at that point, which is the result of the function called, available to the calling function.

Such a possible tree of functions by construction never fails to return as long as each subfunction returns at some point, and it can be of arbitrary complexity (calling depth) unless we run out of resources, such as space on the stack where, among other things, arguments are stored.
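
To illustrate the call-and-return idea and the resource limit mentioned above, a small C sketch: each call pushes arguments and a return address on the stack, control always comes back to the caller, and a deep enough recursion will eventually exhaust the stack (the depth at which that happens depends on the system):

   #include <stdio.h>

   /* a function calling another function: control returns to the caller */
   int square(int x)                { return x * x; }
   int sum_of_squares(int a, int b) { return square(a) + square(b); }

   /* arbitrary calling depth: fine as long as there is stack space left */
   unsigned long depth(unsigned long n) {
       if (n == 0)
           return 0;
       return 1 + depth(n - 1);     /* each nested call uses a bit of stack */
   }

   int main(void) {
       printf("%d\n",  sum_of_squares(3, 4)); /* prints 25, then returns to main   */
       printf("%lu\n", depth(10000));         /* a deep but (usually) safe recursion */
       return 0;
   }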

High-level languages take away a lot of work from the programmer compared to processor-specific machine language programming, and usually offer support for complex mathematical operations, easy-to-use program flow constructs, functions, input and output functions, and more.

In principle, compiling a HLPL generates the equivalent of machine language, which is then assembled and linked. The GNU C and C++ compilers can even be instructed to generate explicit, human-readable machine code (assembly).
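
For example, a trivial C function can be turned into human-readable assembly with gcc's -S option; the exact output depends on the target processor and compiler version:

   /* file: add.c -- compile with:  gcc -S add.c
      This produces add.s, the generated assembly, instead of an object file. */
   int add(int a, int b) {
       return a + b;
   }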

Library

Most HLPLs know the concept of a library of functions, which is like a set of functions which can be linked with a program and which are not necessarily written by the programmer him/herself, in which case one can also use standard libraries, which go with the language.

Libraries should contain well-defined functions which a programmer can call up and use when his program requires them. The linker can take certain functions from a library during the link phase of the program compilation, on the way from the program source code to the actual executable, the program which the operating system or computer can understand and run. That way the program contains only those functions from the library actually called in the program, or the whole library can be added to the executable, which is less efficient, but easier.

We can also store the library in the computer memory before the program starts, and reference and use the functions in it dynamically, even from more than one program, which requires the functions in it to be reentrant, that is, they may not keep state in static or global variables without special precautions.

A dynamically linked library isn't linked to the program at compile time, but at run time.
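
On POSIX systems this run-time linking can even be driven explicitly from the program itself with dlopen/dlsym. A hedged sketch follows; the library name libfoo.so and the function name foo_init are invented for the example, and the program must be linked with -ldl on many systems:

   #include <dlfcn.h>
   #include <stdio.h>

   int main(void) {
       /* load the shared library at run time, not at compile/link time */
       void *handle = dlopen("libfoo.so", RTLD_NOW);
       if (handle == NULL) {
           fprintf(stderr, "dlopen failed: %s\n", dlerror());
           return 1;
       }

       /* look up a function in it by name and call it through a pointer */
       int (*foo_init)(void) = (int (*)(void))dlsym(handle, "foo_init");
       if (foo_init != NULL)
           printf("foo_init returned %d\n", foo_init());

       dlclose(handle);
       return 0;
   }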

Processes

A process is something which comes from an operating system which knows the concept of multitasking.

Threads

Objects

So a thread can be a method of some object, associated with a certain process, possibly stored persistently on a hard disc.

Process and thread switching

Networking basics

Efficiency essentials

Compiling or interpreting

Concurrency versus parallelism

Accessing disc and network stations

Tcl/Tk in all the above aspects

I'm off to the dunes or beach and some PC repair, so more later. Don't forget to add comments and questions; that makes for better understanding.