Newest 64-Bit Single Board Computer

By Kuristosu Dei
AMBLER, Pa. - September 2007 -- MEN Micro Inc., a world-renowned provider of embedded computing and I/O solutions for demanding industrial, mobile and harsh-environment applications, has expanded its series of intercompatible CompactPCI/-Express single board computers (SBCs) with a new 64-bit board based on the Intel® Core™2 Duo Processor T7500 combined with the Mobile Intel 965GM Express Chipset.

Introduced at the Embedded Systems Conference in Boston, the new F18 SBC offers excellent graphics performance, making it ideal for a host of industrial applications such as monitoring, visualization or control computers as well as test and measurement. Its specially developed heat sink and soldered components, which protect against shock and vibration, also equip the F18 for use in mobile applications.

Stephen Cunha, sales director of MEN Micro's US subsidiary, noted, “In the first booked orders for the F18, we have already seen a diverse set of users responding to this new SBC. Customers ranging from those requiring high computing and graphics performance for quality control in steel roller mills to those handling intense data management in power generation applications have all seen the benefits of this new functionality.”

“MEN Micro's live demo at the Embedded Systems Conference is showcasing how the Intel® Core™2 Duo processor can be packaged space-efficiently and robustly, and why low power dissipation is so important,” said Troy Smith, program director, Intel® Communications Alliance, Intel.

Equipped with Intel's 2.2-GHz Core™2 Duo processor T7500 (Intel Core™2 Duo processors L7500 and U7500 are also available), the F18 is the newest and highest-performing member of MEN Micro's Intel®-based CompactPCI 3U product family. The 64-bit board provides a 667/800-MHz frontside bus and high-performance graphics that above all accelerate digital image processing, benefiting applications such as CAD tools, 2D/3D modeling, video and rendering, as well as scientific data processing.

The F18 is a 32-bit/33-MHz system slot or stand-alone board and needs only one slot on the CompactPCI bus. In combination with a PCI Express side card, the F18 can also be used as a system slot board in CompactPCI Express systems. Thanks to fast, soldered 4 GB DDR2 SDRAM, a CompactFlash slot and a SATA hard disk slot (on the side card), the F18 offers abundant memory space for a variety of graphics- and data-intensive applications.

Standard I/O on the front panel of the F18 includes VGA graphics, two Gigabit Ethernet ports connected via PCI Express and two USB 2.0 interfaces. Additional I/O is available on different side cards and includes DVI, audio, additional USB interfaces, UART interfaces and FireWire. Watchdogs for monitoring the processor and the board temperature, as well as rear I/O support, round out the functionality of the F18, which enters the market with board support packages for Windows, Linux and VxWorks.

Designed for long-term support, the F18 has a guaranteed minimum availability of five years.

With the introduction of the F18, there are now four different MEN Micro boards that can be equipped with 16 different processor types, ranging from Pentium® M and Celeron® M processors to Intel® Core™ Duo and Intel® Core™2 Duo processors. All of these boards use the same front connectors and the same side cards for additional I/O functions. This hardware compatibility enables easy migration to the next processor generation without software or system adaptations, providing an availability of more than 10 years.

Pricing, including 4 GB of system memory, starts at $3,719 for single units. Delivery is four weeks ARO.

For additional information, visit www.men.de/products/default.asp?prod=02F018- or contact Stephen Cunha, MEN Micro, Inc., 24 North Main Street, Ambler, PA 19002;

Phone: 215-542-9575; Fax: 215-542-9577; E-mail: Stephen.Cunha@menmicro.com.

For a high resolution photo, please visit www.simongroup.com/PressRoom/menmicro2.html.

READER SERVICE INQUIRIES: Please forward all reader service inquiries to Stephen Cunha, MEN Micro, Inc., 24 North Main Street, Ambler, PA 19002;

E-mail: Stephen.Cunha@menmicro.com.

About MEN Micro, Inc.

Since 1982, MEN Micro has focused on innovation, reliability and flexibility to develop standard and custom board-level solutions that are used throughout industrial embedded applications and harsh environments.

The company provides a robust offering of highly reliable embedded computing and I/O solutions built on the latest technology and innovations. In addition, MEN Micro offers exceptional custom development and environmental qualification services in accordance with industry standards. The company's standard product range contains more than 100 different computer boards and systems with corresponding BIOS, BSP and driver software based on different PowerPC and Pentium platforms. The company has more than 180 employees and is a member of several industry associations, consortiums and alliances.

Source: MEN Micro

A Brief History of C++

By Kuristosu Dei
Computer languages have undergone dramatic evolution since the first electronic computers were built to assist in ballistics calculations during World War II. Early on, programmers worked with the most primitive computer instructions: machine language. These instructions were represented by long strings of ones and zeroes. Soon, assemblers were invented to map machine instructions to human-readable and -manageable mnemonics, such as ADD and MOV.

In time, higher-level languages evolved, such as BASIC and COBOL. These languages let people work with something approximating words and sentences, such as Let I = 100. These instructions were translated into machine language by interpreters and compilers. An interpreter translates a program as it reads it, turning the program instructions, or code, directly into actions. A compiler translates the code into an intermediary form. This step is called compiling, and produces an object file. The compiler then invokes a linker, which turns the object file into an executable program.
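
To make the compile-and-link pipeline concrete, here is a minimal C++ program together with illustrative commands for a typical compiler driver such as g++ (the file names are placeholders):

    // hello.cpp -- a minimal C++ program
    #include <iostream>

    int main()
    {
        std::cout << "Hello, world!" << std::endl;
        return 0;   // report success to the operating system
    }

    // Illustrative compile and link steps with a driver such as g++:
    //   g++ -c hello.cpp -o hello.o    (compile: source code -> object file)
    //   g++ hello.o -o hello           (link: object file -> executable)
    //   ./hello                        (run the executable)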

Because interpreters read the code as it is written and execute it on the spot, interpreters are easy for the programmer to work with. Compilers, however, introduce the extra steps of compiling and linking the code, which is inconvenient. In exchange, compilers produce a program that is very fast each time it is run, because the time-consuming task of translating the source code into machine language has already been accomplished.

Another advantage of many compiled languages like C++ is that you can distribute the executable program to people who don't have the compiler. With an interpreted language, you must have the interpreter to run the program.

For many years, the principal goal of computer programmers was to write short pieces of code that would execute quickly. The program needed to be small, because memory was expensive, and it needed to be fast, because processing power was also expensive. As computers have become smaller, cheaper, and faster, and as the cost of memory has fallen, these priorities have changed. Today the cost of a programmer's time far outweighs the cost of most of the computers in use by businesses. Well-written, easy-to-maintain code is at a premium. Easy-to-maintain means that as business requirements change, the program can be extended and enhanced without great expense.
Programs

The word program is used in two ways: to describe individual instructions, or source code, created by the programmer, and to describe an entire piece of executable software. This distinction can cause enormous confusion, so we will try to distinguish between the source code on one hand, and the executable on the other.
New Term: A program can be defined as either a set of written instructions created by a programmer or an executable piece of software.

Source code can be turned into an executable program in two ways: Interpreters translate the source code into computer instructions, and the computer acts on those instructions immediately. Alternatively, compilers translate source code into a program, which you can run at a later time. While interpreters are easier to work with, most serious programming is done with compilers because compiled code runs much faster. C++ is a compiled language.
Solving Problems

The problems programmers are asked to solve have been changing. Twenty years ago, programs were created to manage large amounts of raw data. The people writing the code and the people using the program were all computer professionals. Today, computers are in use by far more people, and most know very little about how computers and programs work. Computers are tools used by people who are more interested in solving their business problems than in struggling with the computer.

Ironically, in order to become easier to use for this new audience, programs have become far more sophisticated. Gone are the days when users typed in cryptic commands at esoteric prompts, only to see a stream of raw data. Today's programs use sophisticated "user-friendly interfaces," involving multiple windows, menus, dialog boxes, and the myriad of metaphors with which we've all become familiar. The programs written to support this new approach are far more complex than those written just ten years ago.

As programming requirements have changed, both languages and the techniques used for writing programs have evolved. While the complete history is fascinating, this book will focus on the transformation from procedural programming to object-oriented programming.
Procedural, Structured, and Object-Oriented Programming

Until recently, programs were thought of as a series of procedures that acted upon data. A procedure, or function, is a set of specific instructions executed one after the other. The data was quite separate from the procedures, and the trick in programming was to keep track of which functions called which other functions, and what data was changed. To make sense of this potentially confusing situation, structured programming was created.

The principal idea behind structured programming is as simple as the idea of divide and conquer. A computer program can be thought of as consisting of a set of tasks. Any task that is too complex to be described simply is broken down into a set of smaller component tasks, until the tasks are small and self-contained enough to be easily understood.

As an example, computing the average salary of every employee of a company is a rather complex task. You can, however, break it down into these subtasks, sketched in code after the lists below:
1. Find out what each person earns.

2. Count how many people you have.

3. Total all the salaries.

4. Divide the total by the number of people you have.

Totaling the salaries can be broken down into
1. Get each employee's record.

2. Access the salary.

3. Add the salary to the running total.

4. Get the next employee's record.

In turn, obtaining each employee's record can be broken down into
1. Open the file of employees.

2. Go to the correct record.

3. Read the data from disk.
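
To make the decomposition concrete, here is a minimal C++ sketch of the salary example. The file name employees.txt and the record layout (a name followed by a salary on each line) are hypothetical, invented for illustration:

    // average_salary.cpp -- structured decomposition of the salary problem
    #include <fstream>
    #include <iostream>
    #include <string>

    // Subtask: get the next employee's record and access the salary.
    // Returns false when there are no more records.
    bool getNextSalary(std::ifstream& file, double& salary)
    {
        std::string name;   // each hypothetical record: a name, then a salary
        return static_cast<bool>(file >> name >> salary);
    }

    int main()
    {
        std::ifstream employees("employees.txt");  // open the file of employees
        double total = 0.0;                        // running total of salaries
        int count = 0;                             // how many people we have
        double salary = 0.0;

        while (getNextSalary(employees, salary))   // get each record
        {
            total += salary;                       // add salary to the total
            ++count;                               // count this person
        }

        if (count > 0)                             // divide total by head count
            std::cout << "Average salary: " << total / count << '\n';
        return 0;
    }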

Structured programming remains an enormously successful approach for dealing with complex problems. By the late 1980s, however, some of the deficiencies of structured programming had become all too clear.

First, it is natural to think of your data (employee records, for example) and what you can do with your data (sort, edit, and so on) as related ideas, yet structured programs keep the two separate.

Second, programmers found themselves constantly reinventing new solutions to old problems. This is often called "reinventing the wheel," and is the opposite of reusability. The idea behind reusability is to build components that have known properties, and then to be able to plug them into your program as you need them. This is modeled after the hardware world--when an engineer needs a new transistor, she doesn't usually invent one, she goes to the big bin of transistors and finds one that works the way she needs it to, or perhaps modifies it. There was no similar option for a software engineer.
New Term: The way we are now using computers--with menus and buttons and windows--fosters a more interactive, event-driven approach to computer programming. Event-driven means that an event happens--the user presses a button or chooses from a menu--and the program must respond. Programs are becoming increasingly interactive, and it has become important to design for that kind of functionality.

Old-fashioned programs forced the user to proceed step-by-step through a series of screens. Modern event-driven programs present all the choices at once and respond to the user's actions.
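
As a minimal sketch of this idea in modern C++, the loop below waits for an event and dispatches to whatever handler is registered for it; the event names and handlers are invented for illustration:

    // events.cpp -- a minimal event-driven dispatch loop
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    int main()
    {
        // Map each event name to the code that responds to it.
        std::map<std::string, std::function<void()>> handlers;
        handlers["button_press"] = [] { std::cout << "Button pressed\n"; };
        handlers["menu_choice"]  = [] { std::cout << "Menu item chosen\n"; };

        // The event loop: wait for an event, then respond to it.
        std::string event;
        while (std::cin >> event && event != "quit")
        {
            auto it = handlers.find(event);
            if (it != handlers.end())
                it->second();                   // respond to the event
            else
                std::cout << "Unknown event\n";
        }
        return 0;
    }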

Object-oriented programming attempts to respond to these needs, providing techniques for managing enormous complexity, achieving reuse of software components, and coupling data with the tasks that manipulate that data.

The essence of object-oriented programming is to treat data and the procedures that act upon the data as a single "object"--a self-contained entity with an identity and certain characteristics of its own.
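
A minimal C++ sketch of such an object follows; the Counter class is invented for illustration and simply bundles a piece of data with the procedures that act on it:

    // counter.cpp -- data and the procedures that act on it as one object
    #include <iostream>

    class Counter
    {
    public:
        void increment() { ++value; }          // a procedure acting on the data
        int  current() const { return value; }
    private:
        int value = 0;                          // the data, carried by the object
    };

    int main()
    {
        Counter c;          // a self-contained entity with its own identity
        c.increment();
        c.increment();
        std::cout << c.current() << std::endl;  // prints 2
        return 0;
    }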
C++ and Object-Oriented Programming

C++ fully supports object-oriented programming, including the four pillars of object-oriented development: encapsulation, data hiding, inheritance, and polymorphism.
Encapsulation and Data Hiding

When an engineer needs to add a resistor to the device she is creating, she doesn't typically build a new one from scratch. She walks over to a bin of resistors, examines the colored bands that indicate the properties, and picks the one she needs. The resistor is a "black box" as far as the engineer is concerned--she doesn't much care how it does its work as long as it conforms to her specifications; she doesn't need to look inside the box to use it in her design.

The property of being a self-contained unit is called encapsulation. With encapsulation, we can accomplish data hiding. Data hiding is the highly valued characteristic that an object can be used without the user knowing or caring how it works internally. Just as you can use a refrigerator without knowing how the compressor works, you can use a well-designed object without knowing about its internal data members.
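
Continuing the refrigerator analogy, the following sketch shows data hiding in C++; the Refrigerator class and its internals are invented for illustration. The caller uses only the public interface and neither knows nor cares how the private parts work:

    // fridge.cpp -- encapsulation and data hiding
    #include <iostream>

    class Refrigerator
    {
    public:
        // The public interface: all the user ever needs to know.
        void setTemperature(double celsius)
        {
            target = celsius;
            runCompressor();            // internal detail, hidden from the user
        }
        double temperature() const { return target; }

    private:
        // Private internals: the user cannot see or depend on these.
        void runCompressor() { compressorRunning = true; }
        double target = 4.0;            // degrees Celsius
        bool compressorRunning = false;
    };

    int main()
    {
        Refrigerator fridge;
        fridge.setTemperature(2.5);                      // use the black box...
        std::cout << fridge.temperature() << std::endl;  // ...without opening it
        return 0;
    }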

Title : Super PI Ver1.1e (calculation of pi up to 33.55 million digits)
Keywords: PI MATH WINDOWS

In August 1995, the calculation of pi to 4,294,960,000 decimal digits
was completed on a supercomputer at the University of Tokyo. The
program was written by D. Takahashi in collaboration with Dr. Y. Kanada
at the computer center of the University of Tokyo. This record should be
the current world record. (Details are given in the Windows help.)
This record-breaking program was ported to personal computer environments
such as Windows NT and Windows 95. Calculating 33.55 million digits
takes less than 3 days on a 90 MHz Pentium with 40 MB of main memory
and 340 MB of available storage.

What is Overclocking?
Overclocking, in simple terms, means running your computer's CPU at a speed higher than the manufacturer intended. It is the practice of increasing the clock rate of a processor beyond its rating for the purpose of increasing system speed without buying a new, faster, but more expensive processor.

"Overclocking" is a slang term, and not an engineering or scientific term. The correct technical terms are "speed-margining" (more common) and "undertiming" (less common). One can also "overclock" the computer's bus. The 'overclocking' describes the process of running your CPU at a clock and/or bus speed that the CPU hasn't been specified for - logically, that speed is usually higher.

The tempting idea behind overclocking is to increase system performance at very little or no cost. In many cases you only need to change a few settings on your motherboard to make your system run faster. In other cases you only have to add a few components (usually for cooling) to achieve the performance increase.

In the past, overclocking usually meant nothing more than increasing a CPU's clock speed to that of the next higher model, e.g. a Pentium 120 to a Pentium 133. Now, with new bus speeds available on several motherboards, you can change the clock and bus speed of a CPU to values that don't officially exist. This new way of overclocking yields an even higher performance increase than the classic one. It even gives you the ability to increase the performance of the fastest model of a particular CPU production line (e.g. P200 to 250 MHz, Pentium Pro 200 to 233 MHz).
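
The arithmetic behind bus-based overclocking is simply core clock = bus speed x multiplier. The following small C++ sketch illustrates the classic Pentium 200 example from above; the bus and multiplier figures are the commonly cited approximate values:

    // clockspeed.cpp -- core clock = bus speed * multiplier
    #include <iostream>

    double coreClock(double busMHz, double multiplier)
    {
        return busMHz * multiplier;   // effective CPU clock in MHz
    }

    int main()
    {
        // Stock Pentium 200: 66.6 MHz bus x 3.0 multiplier = ~200 MHz
        std::cout << coreClock(66.6, 3.0) << " MHz\n";

        // Overclocked via the bus: 83.3 MHz bus x 3.0 multiplier = ~250 MHz
        std::cout << coreClock(83.3, 3.0) << " MHz\n";
        return 0;
    }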