Monthly Archives: September 2016


Difference Between Computer Software and Hardware

While computer jargon can be hard to get to grips with, two terms that pop up extremely often in any computer discussion are the words “software” and “hardware”.

But what is the difference between computer software and hardware?

These two terms refer to the most fundamental parts of computer systems. Both of them are vital for any computer to operate, and they are also dependent on one another.

Definition of Computer Hardware and Software

When we talk about computer hardware, we mean the physical components of your computer. Things such as the motherboard, the CPU, the video card, and the keyboard and mouse are all “hardware”.

The difference between computer software and hardware is that software refers to the code and programs that run on your computer. These include your operating system (Windows, etc.), media players, Photoshop and so on.

Purpose

Computer hardware is usually multi-purpose in that it is able to perform lots of different tasks. For instance, your computer monitor doesn’t just display still images; it also shows videos, widgets and text. One difference between computer software and hardware is that a piece of software is normally designed around a single main task.

Your media player, for example, is only for playing media like movies and songs. It cannot edit photos or browse the web. The only real exception to this is the operating system itself, which is a user-friendly interface designed to let you access all the other software and files stored on your PC.

System Requirements

Computer software can only function on a computer if that computer meets the system requirements needed to run it properly. These typically include a minimum amount of free hard drive space, a minimum processor speed, a minimum amount of RAM and a supported operating system.

Occasionally a piece of software will have additional requirements, and these will normally be printed on the box when you purchase it, or displayed on the website you downloaded it from.
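To make the idea concrete, here is a minimal sketch (Python standard library, Linux-oriented for the RAM check) of how a program might compare a machine against a set of minimum requirements. The thresholds and the list of supported operating systems below are made-up examples, not any real product’s requirements.

```python
import os
import platform
import shutil

# Hypothetical minimum requirements -- illustrative numbers only.
MIN_FREE_DISK_GB = 2
MIN_RAM_GB = 4
SUPPORTED_OS = {"Windows", "Linux", "Darwin"}   # Darwin = macOS

def check_requirements(install_path="."):
    # Free disk space where the program would be installed.
    free_gb = shutil.disk_usage(install_path).free / 1e9

    # Physical RAM via POSIX sysconf; not available on every platform.
    try:
        ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
    except (AttributeError, ValueError, OSError):
        ram_gb = None

    print(f"Free disk: {free_gb:.1f} GB (need {MIN_FREE_DISK_GB} GB)")
    if ram_gb is not None:
        print(f"RAM: {ram_gb:.1f} GB (need {MIN_RAM_GB} GB)")
    else:
        print("RAM: could not be determined on this platform")
    print(f"Operating system {platform.system()} supported:",
          platform.system() in SUPPORTED_OS)

if __name__ == "__main__":
    check_requirements()
```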

64-Bit vs 32-Bit

The transition from 32-bit to 64-bit operating systems has become a recent issue for both hardware and software. The difference between the two kinds of operating system is that 64-bit systems can address more RAM and can process much bigger chunks of data than the old 32-bit systems.

In order to run 64-bit software, you need a 64-bit CPU along with a compatible motherboard. Because many computers don’t have the hardware necessary to run the 64-bit versions of software, many manufacturers release both 64-bit and 32-bit versions of their programs.
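As a quick illustration, here is a small sketch (Python standard library only) showing how a program can check whether it is running as a 64-bit process and what architecture the machine reports. It is just an illustrative check, not something a boxed product would literally ship.

```python
import platform
import struct

# Pointer size of the running interpreter: 8 bytes means a 64-bit process,
# 4 bytes means a 32-bit process.
process_bits = struct.calcsize("P") * 8

# Architecture reported by the operating system, e.g. 'x86_64', 'AMD64', 'arm64'.
machine = platform.machine()

print(f"This process is {process_bits}-bit")
print(f"The machine reports architecture: {machine}")

# Note: a 32-bit program can usually run on a 64-bit CPU and OS, but a 64-bit
# program cannot run on a 32-bit CPU, which is why vendors ship both versions.
```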

Development

Both computer software and computer hardware are being constantly developed, with superior components and programs being released all the time. The development of hardware is usually focused on creating faster and more compact components through the use of new technology.

Meanwhile, software developers are constantly striving to keep pace with these advances in hardware by building smoother-running, better-looking and more comprehensive programs. The result is that computer users are regularly required to upgrade their hardware to be able to run the latest software.

Conclusion

I hope this article explaining the difference between computer software and hardware has been useful for you. Obviously this website is dedicated to the hardware side of things, and I encourage you to take advantage of the resources here.


Learning About Computer Hardware

This was the way I learned about computer hardware. In Sydney, Australia, where I live, we have a council cleanup. During this time, everyone puts their rubbish and junk out on the side of the road for the council to pick up on a certain date. Included in this rubbish were computers, lots of computers. So when the council cleanup was in my area, I would jump on my bike (often with some of my brothers) and ride around looking at the junk piles.

I started to collect computers and have a look inside them. I was really interested in how these complicated machines worked. Then I started to fiddle with some of the components and learn what each one did. I wanted to upgrade my computer, so I tried adding a hard drive. I didn’t know how, so I searched Google and got a book from the library.

After successfully doing this, I thought that my CPU (central processing unit) was a bit slow, so I tried to change that. Now this was my embarrassing moment, and one that I look back on and laugh at. In my family we have many computers. Mum had just bought a new computer, so I received her last one. She hadn’t taken any of her information off it yet, so she didn’t want it to break. The computer she gave me had a 700 MHz AMD chip, and in my searching through junk I had found a 1 GHz Intel Pentium III.

I didn’t really understand computer hardware at the time, so I thought that you could just swap any CPU. So I took Mum’s AMD chip out and tried to squeeze my Pentium III chip in its place. Of course it didn’t fit, since an Intel chip can’t go in an AMD motherboard. However, this didn’t stop me. I started to bend the pins on the Intel chip so that it would fit in the AMD motherboard, and eventually it did. I turned it on, smelt something burning, and then saw something burning.

I had completely blown up everything in the computer. My mum wasn’t happy as she lost her information and my computer (my first computer) didn’t work anymore. What a learning experience!


How Computers Work

So how do computers work?

Have you ever just looked at a computer and thought, “how does that work”? If you think about it, computers are quite amazing. You press a few buttons and you can talk to a friend on the other side of the world, learn anything you want on the internet, listen to music, watch TV, write stories, make videos and do much, much more.

The more you think about it, the more amazing it is, and the crazier it is that a bunch of computer parts can do all this. So how do computers work?

Well, I will try to answer this as best I can. Basically, a computer consists of two broad categories, hardware and software, and the way a computer works comes down to these two working together. I will outline them both briefly below.

Computer Hardware

If you have been reading any of this website, then you probably already understand what computer hardware is, because that is what this website is about. Like seriously, www.computer-hardware-explained.com

So briefly: computer hardware is the physical equipment that you actually see. This includes the computer itself, the monitor, keyboard, mouse, printer, speakers and so on. Inside the computer is more hardware, such as the hard drive, CPU (central processing unit), motherboard, RAM (random access memory) and more.
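One simple way to see hardware and software working together is a program that asks the operating system about the hardware it is running on. The sketch below uses only the Python standard library; the exact values it prints will of course depend on your machine.

```python
import os
import platform
import shutil

# Software never pokes the CPU or disk directly; it asks the operating
# system, and the operating system drives the hardware.
print("Processor / architecture:", platform.processor() or platform.machine())
print("Logical CPU cores:", os.cpu_count())
print("Operating system:", platform.system(), platform.release())

total, used, free = shutil.disk_usage(os.path.abspath(os.sep))
print(f"Disk: {total / 1e9:.0f} GB total, {free / 1e9:.0f} GB free")
```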


Powerful Supercomputer Inched Toward Exascale

In June, the ranks of the Top500 list were rearranged, and the title of world’s most powerful supercomputer was handed off to a new machine—China’s Sunway TaihuLight.

The Wuxi-based machine can perform the Linpack Benchmark—a long-standing arbiter of supercomputer prowess—at a rate of 93 petaflops, or 93 quadrillion floating-point operations per second. This performance is more than twice that of the previous record holder, China’s Tianhe-2. What’s more, TaihuLight achieves this capacity while consuming 2.4 megawatts less power than Tianhe-2.

Such efficiency gains are important if supercomputer designers hope to reach exascale operation, somewhere in the realm of 1,000 Pflops. Computers with that capability could be a boon for advanced manufacturing and national security, among many other applications. China, Europe, Japan, and the United States are all pushing toward the exascale range. Some countries are reportedly setting their sights on doing so by 2020; the United States is targeting the early 2020s. But two questions loom over those efforts: How capable will those computers be? And can we make them energy efficient enough to be economical?

We can get to the exascale now “if you’re willing to pay the power bill,” says Peter Kogge, a professor at the University of Notre Dame. Scaling up a supercomputer with today’s technology to create one that is 10 times as big would demand at least 10 times as much power, Kogge explains. And the difference between 20 MW and 200 MW, he says, “is the difference [between having] a substation or a nuclear power plant next to you.”

Kogge, who led a 2008 study on reaching the exascale, is updating power projections to cover the three categories of supercomputers built today: those with “heavyweight” high-performance CPUs; those that use “lightweight” microprocessors that are slower but cooler, and so can be packed more densely; and those that take advantage of graphics processing units to accelerate computation.

TaihuLight follows the lightweight approach, and it has made some sacrifices in pursuit of energy efficiency. Based on its hardware specs, TaihuLight can, in theory, crunch numbers at a rate of 125 Pflops. The machine reaches 74 percent of this peak theoretical capacity when running Linpack. But it does not fare as well on a new alternative benchmark, High Performance Conjugate Gradients (HPCG), which is designed to reflect how well a computer can perform more memory- and communications-intensive, real-world applications. When it runs HPCG, TaihuLight utilizes just 0.3 percent of its theoretical peak abilities, which means that only 3 out of every 1,000 possible floating-point operations are actually used by the computer. By comparison, Tianhe-2 and the United States’ Titan, the second- and third-fastest supercomputers in the Top500 rankings, respectively, can take advantage of just over 1 percent of their computing capacity. Japan’s K computer, currently ranked fifth on the list, achieved 4.9 percent with the HPCG metric.
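The percentages above follow directly from the reported numbers. Here is a quick back-of-the-envelope calculation (Python) of TaihuLight’s Linpack efficiency and of what a 0.3 percent HPCG figure means in practice:

```python
# Figures quoted in the article, in petaflops.
taihulight_peak = 125      # theoretical peak
taihulight_linpack = 93    # measured Linpack rate

linpack_efficiency = taihulight_linpack / taihulight_peak * 100
print(f"Linpack efficiency: {linpack_efficiency:.0f}% of peak")   # about 74%

# HPCG efficiency of 0.3% means roughly 3 of every 1,000 floating-point
# operations the hardware could theoretically perform are actually delivered.
hpcg_efficiency = 0.3
print(f"HPCG: {hpcg_efficiency / 100 * 1000:.0f} operations delivered per 1,000 possible")
```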

“Everything is a balancing act,” says Jack Dongarra, a professor at the University of Tennessee, Knoxville, and one of the organizers of the Top500. “They produced a processor that can deliver high arithmetic performance but is very weak in terms of data movement.” But he notes that the TaihuLight team has developed applications that take advantage of the architecture; he says that three projects that were finalists for this year’s ACM Gordon Bell Prize, a prestigious supercomputing award, were designed to run on the machine.

TaihuLight uses DDR3, an older, slower memory, to save on power. Its architecture also uses small amounts of local memory near each core instead of a more traditional memory hierarchy, explains John Goodacre, a professor of computer architectures at the University of Manchester, in England. He says that while today’s applications can execute between 1 and 10 floating-point operations for every byte of main memory accessed, that ratio needs to be far higher for applications to run efficiently on TaihuLight. The design cuts down on a big expense in a supercomputer’s power budget: the amount of energy consumed shuttling data back and forth.
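To make the flops-per-byte figure concrete, here is an illustrative (not TaihuLight-specific) calculation of that ratio for a simple vector update, y[i] = a*x[i] + y[i], on 8-byte double-precision values:

```python
# Two floating-point operations per element: one multiply and one add.
flops_per_element = 2

# Memory traffic per element: read x[i], read y[i], write y[i], 8 bytes each.
bytes_per_element = 3 * 8

intensity = flops_per_element / bytes_per_element
print(f"Flops per byte of main-memory traffic: {intensity:.3f}")   # about 0.083

# Typical applications manage roughly 1-10 flops per byte of main memory
# accessed; an architecture like TaihuLight's rewards codes that push the
# ratio far higher, for example by reusing data held in the small local
# memories near each core.
```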

“I think what they’ve done is build a machine that changes some of the design rules that people have assumed are part of the requirements” for moving toward the exascale, Goodacre says. Further progress will depend, as the TaihuLight team has shown, on end-to-end design, he says. That includes looking not only at changes to hardware—a number of experts point to 3D stacking of logic and memory—but also to the fundamental programming paradigms we use to take advantage of the machines.