
Friday, December 25, 2009

Intel launches new Atom for netbooks

Intel has launched its next-generation Atom netbook processor, saying it will bring longer battery life and improved system performance to low-cost laptops. The single-core Atom N450 chip is 60 percent smaller than existing Atom processors, and consumes about 5.5 watts, 20 percent less than previous models. Netbooks with N450 chips will be shown by major vendors at the upcoming Consumer Electronics Show in Las Vegas in January. The chip integrates the graphics and memory controller into the CPU, making it capable of playing 720p high-definition video natively. Intel also launched two processors for low-cost, small-form-factor desktops, the D410 and the dual-core D510.

Tuesday, December 22, 2009

Gartner Identifies the Top 10 Strategic Technologies for 2010

Analysts Examine Latest Industry Trends During Gartner Symposium/ITxpo, October 18-22, in Orlando 

ORLANDO, Fla., October 20, 2009 — 

Gartner, Inc. analysts today highlighted the top 10 technologies and trends that will be strategic for most organizations in 2010. The analysts presented their findings during Gartner Symposium/ITxpo, being held here through October 22.

Gartner defines a strategic technology as one with the potential for significant impact on the enterprise in the next three years. Factors that denote significant impact include a high potential for disruption to IT or the business, the need for a major dollar investment, or the risk of being late to adopt.

These technologies impact the organization's long-term plans, programs and initiatives. They may be strategic because they have matured to broad market use or because they enable strategic advantage from early adoption.

“Companies should factor the top 10 technologies into their strategic planning process by asking key questions and making deliberate decisions about them during the next two years,” said David Cearley, vice president and distinguished analyst at Gartner. “However, this does not necessarily mean adoption and investment in all of the technologies. They should determine which technologies will help and transform their individual business initiatives.”

The top 10 strategic technologies for 2010 include:

Cloud Computing. Cloud computing is a style of computing in which providers deliver a variety of IT-enabled capabilities to consumers as services. Cloud-based services can be exploited in a variety of ways to develop an application or a solution. Using cloud resources does not eliminate the costs of IT solutions, but it does rearrange some and reduce others. In addition to consuming cloud services, enterprises will increasingly act as cloud providers, delivering application, information or business process services to customers and business partners.

Advanced Analytics. Optimization and simulation use analytical tools and models to maximize business process and decision effectiveness by examining alternative outcomes and scenarios before, during and after process implementation and execution. This can be viewed as a third step in supporting operational business decisions. Fixed rules and prepared policies gave way to more informed decisions powered by the right information delivered at the right time, whether through customer relationship management (CRM), enterprise resource planning (ERP) or other applications. The new step is to provide simulation, prediction, optimization and other analytics, not simply information, to empower even more decision flexibility at the time and place of every business process action. This step looks into the future, predicting what can or will happen.
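
As a rough illustration of the simulation step, the short Python sketch below samples two alternative fulfillment policies thousands of times and compares their expected outcomes before any decision is made; the policy names, demand distribution and cost figures are invented purely for the example.

import random

def simulate_policy(ship_from_store: bool, trials: int = 10000) -> float:
    """Monte Carlo sketch: estimate average daily profit under a hypothetical policy."""
    total = 0.0
    for _ in range(trials):
        demand = max(0.0, random.gauss(100, 20))   # invented demand distribution
        margin = 4.0 if ship_from_store else 5.0   # invented unit margins
        shipping = random.uniform(2, 6) if ship_from_store else random.uniform(4, 9)
        total += demand * (margin - 0.1 * shipping)
    return total / trials

# Examine alternative scenarios before committing to one.
for policy in (True, False):
    print("ship_from_store =", policy, "expected daily profit ~", round(simulate_policy(policy)))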

Client Computing. Virtualization is bringing new ways of packaging client computing applications and capabilities. As a result, the choice of a particular PC hardware platform, and eventually the OS platform, becomes less critical. Enterprises should proactively build a five- to eight-year strategic client computing roadmap outlining an approach to device standards, ownership and support; operating system and application selection, deployment and update; and management and security plans to manage diversity.

IT for Green. IT can enable many green initiatives. The use of IT, particularly among white-collar staff, can greatly enhance an enterprise’s green credentials. Common green initiatives include using e-documents, reducing travel and teleworking. IT can also provide the analytic tools that others in the enterprise may use to reduce energy consumption in the transportation of goods or in other carbon management activities.

Reshaping the Data Center. In the past, design principles for data centers were simple: figure out what you have, estimate growth for 15 to 20 years, then build to suit. Newly built data centers often opened with huge areas of white floor space, fully powered and backed by an uninterruptible power supply (UPS), water- and air-cooled, and mostly empty. However, costs are actually lower if enterprises adopt a pod-based approach to data center construction and expansion. If 9,000 square feet is expected to be needed during the life of a data center, design the site to support it, but build only what’s needed for the next five to seven years. Cutting operating expenses, which are a nontrivial part of the overall IT spend for most clients, frees up money to apply to other projects or investments, either in IT or in the business itself.

Social Computing. Workers do not want two distinct environments to support their work – one for their own work products (whether personal or group) and another for accessing “external” information. Enterprises must focus both on the use of social software and social media within the enterprise and on participation in and integration with externally facing, enterprise-sponsored and public communities. Do not ignore the role of the social profile in bringing communities together.

Security – Activity Monitoring. Traditionally, security has focused on putting up a perimeter fence to keep others out, but it has evolved toward monitoring activities and identifying patterns that would previously have been missed. Information security professionals face the challenge of detecting malicious activity in a constant stream of discrete events that are usually associated with an authorized user and are generated from multiple network, system and application sources. At the same time, security departments face increasing demands for ever-greater log analysis and reporting to support audit requirements. A variety of complementary (and sometimes overlapping) monitoring and analysis tools help enterprises better detect and investigate suspicious activity – often with real-time alerting or transaction intervention. By understanding the strengths and weaknesses of these tools, enterprises can better understand how to use them to defend the enterprise and meet audit requirements.
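
As a minimal illustration of correlating discrete events, the Python sketch below flags an account that generates a burst of failed logins within a sliding five-minute window; it is not based on any particular monitoring product, and the window, threshold and event fields are assumptions made for the example.

from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)        # assumed sliding window
THRESHOLD = 10                       # assumed alert threshold

failed_logins = defaultdict(deque)   # user -> timestamps of recent failed logins

def process_event(user: str, action: str, timestamp: datetime) -> None:
    """Correlate one discrete event against recent history for the same user."""
    if action != "login_failed":
        return
    recent = failed_logins[user]
    recent.append(timestamp)
    while recent and timestamp - recent[0] > WINDOW:
        recent.popleft()             # drop events that have aged out of the window
    if len(recent) >= THRESHOLD:
        print("ALERT:", user, "had", len(recent), "failed logins within five minutes")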

Flash Memory. Flash memory is not new, but it is moving up to a new tier in the storage echelon. Flash memory is a semiconductor memory device, familiar from its use in USB memory sticks and digital camera cards. It is much faster than rotating disk but considerably more expensive, although this differential is shrinking. At the current rate of price declines, the technology will enjoy more than a 100 percent compound annual growth rate over the next few years and become strategic in many IT areas, including consumer devices, entertainment equipment and other embedded IT systems. In addition, it offers a new layer of the storage hierarchy in servers and client computers, with key advantages including space, heat, performance and ruggedness.

Virtualization for Availability. Virtualization has been on the list of top strategic technologies in previous years. It is on the list this year because Gartner emphasizes new elements, such as live migration for availability, that have longer-term implications. Live migration is the movement of a running virtual machine (VM) while its operating system and other software continue to execute as if they remained on the original physical server. This takes place by replicating the state of physical memory between the source and destination VMs; then, at some instant in time, one instruction finishes execution on the source machine and the next instruction begins on the destination machine.

However, if replication of memory continues indefinitely while execution of instructions remains on the source VM, then should the source VM fail, the next instruction would simply take place on the destination machine. If the destination VM were to fail, a new destination can be picked and the indefinite migration restarted, making very high availability possible.
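
The pre-copy loop implied by this description can be sketched roughly in Python as follows; the objects and method names (source, dest, read_pages and so on) are hypothetical stand-ins, since real hypervisors perform these steps at the memory-page level with hardware support for dirty-page tracking.

def live_migrate(source, dest, max_rounds: int = 30, dirty_limit: int = 64):
    """Conceptual pre-copy live migration: copy memory repeatedly, then cut over."""
    dirty_pages = source.all_pages()                 # first pass copies everything
    for _ in range(max_rounds):
        dest.write_pages(source.read_pages(dirty_pages))
        dirty_pages = source.pages_dirtied_since_last_copy()
        if len(dirty_pages) <= dirty_limit:          # remaining delta is small enough
            break
    source.pause()                                   # brief stop-and-copy phase
    dest.write_pages(source.read_pages(dirty_pages))
    dest.resume()                                    # the next instruction executes here
    source.discard()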

The key value proposition is to displace a variety of separate mechanisms with a single “dial” that can be set to any level of availability, from baseline to fault tolerance, all using a common mechanism and permitting the settings to be changed rapidly as needed. Expensive high-reliability hardware, failover cluster software and perhaps even fault-tolerant hardware could be dispensed with while still meeting availability needs. This is key to cutting costs and lowering complexity, as well as to increasing agility as needs shift.

Mobile Applications. By year-end 2010, 1.2 billion people will carry handsets capable of rich mobile commerce, providing a rich environment for the convergence of mobility and the Web. There are already many thousands of applications for platforms such as the Apple iPhone, in spite of the limited market and the need for unique coding. It may take a newer version that is designed to operate flexibly on both full PC and miniature systems, but if the operating system interface and processor architecture were identical, that enabling factor would create a huge turn upwards in mobile application availability.

“This list should be used as a starting point and companies should adjust their list based on their industry, unique business needs and technology adoption mode,” said Carl Claunch, vice president and distinguished analyst at Gartner. “When determining what may be right for each company, the decision may not have anything to do with a particular technology. In other cases, it will be to continue investing in the technology at the current rate. In still other cases, the decision may be to test/pilot or more aggressively adopt/deploy the technology.” 


About Gartner Symposium/ITxpo

Gartner Symposium/ITxpo is the industry's largest and most important annual gathering of CIOs and senior IT executives. This event delivers independent and objective content with the authority and weight of the world's leading IT research and advisory organization, and provides access to the latest solutions from key technology providers. Gartner's annual Symposium/ITxpo events are key components of attendees' annual planning efforts. They rely on Gartner Symposium/ITxpo to gain insight into how their organizations can use IT to address business challenges and improve operational efficiency. More information can be found at www.gartner.com/us/symposium.




Thursday, December 17, 2009

Netbooks boost end-of-year PC sales


Sales of personal computers have increased for the first time in almost a year, report market analysts IDC.

Sales of PCs climbed 2.3% in the third quarter of 2009 according to figures compiled by the firm. Sales of mobile computers were particularly strong. 

The upward swing is a marked change from the previous three quarters when sales were down during every period. 

Saturday, December 12, 2009

Intel hopes 48-core chip will solve new challenges

SAN FRANCISCO--Pushing several steps farther in the multicore direction, Intel on Wednesday demonstrated a fully programmable 48-core processor it thinks will pave the way for massive data computers powerful enough to do more of what humans can. 

The 1.3-billion-transistor processor, called the Single-chip Cloud Computer (SCC), is the successor to the 80-core "Polaris" processor that Intel's Tera-scale research project produced in 2007. Unlike that precursor, though, the second-generation model is able to run the standard software of Intel's x86 chips, such as its Pentium and Core models. 

The cores themselves aren't terribly powerful--more like lower-end Atom processors than Intel's flagship Nehalem models, Intel Chief Technology Officer Justin Rattner said at a press event here. But collectively they pack a lot of power, he said, and Intel has ambitious goals in mind for the overall project.


"The machine will be capable of understanding the world around them much as humans do," Rattner said. "They will see and hear and probably speak and do a number of other things that resemble human-like capabilities, and will demand as a result very (powerful) computing capability." 

Intel is working with companies facing large-scale computing challenges that today require thousands of networked servers. That's very much a here-and-now problem compared to the more sci-fi challenges of computer vision. 

Intel's idea with the SCC and its ilk, Rattner said: "Could you replace a rack full of equipment today with one or a number of high-core count processors like the SCC?" 

The chipmaker has found only one flaw in the chip so far and has booted Windows and Linux on SCC systems. At the event, the company demonstrated computers using the processor to run Microsoft's Visual Studio on Windows, among other tasks. 

No silver bullet for parallel programming
 
The Tera-scale project doesn't fundamentally address one of the big challenges in today's computing industry, though: getting today's computing jobs, which are often designed to run as a single thread of instructions rather than as independent tasks running in parallel, to take advantage of multicore chips. In days of yore, processor clock frequencies got steadily faster, letting single threads execute faster, but overheating issues led chip designers down the multicore path in their efforts to increase computing power. 
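
To illustrate the kind of restructuring involved, the sketch below uses a toy prime-counting workload and Python's standard multiprocessing module purely as an analogy, writing the same CPU-bound job first as a single serial thread of work and then as independent tasks spread across cores.

from multiprocessing import Pool

def is_prime(n: int) -> bool:
    """CPU-bound check used only to give the cores something to chew on."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

if __name__ == "__main__":
    numbers = range(2, 200000)

    # Serial version: a single thread of instructions on one core.
    serial_count = sum(is_prime(n) for n in numbers)

    # Parallel version: the same job expressed as independent tasks,
    # which is the restructuring much existing software still needs.
    with Pool() as pool:
        parallel_count = sum(pool.map(is_prime, numbers, chunksize=1000))

    assert parallel_count == serial_count
    print(serial_count, "primes found")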


"This isn't a full solution," Rattner said of the programming challenge. He said that from a programmer's perspective, the SCC is similar in many ways to a server with 48 cores. 

While the chip may not have any silver bullets for the parallel programming challenge, it does have the advantage of some compatibility with existing computer designs. It can run ordinary software for Intel chips, unlike the increasingly capable graphics chips touted by Intel rivals Nvidia and Advanced Micro Devices. 

"Our thrust is to maintain the compatibility and familiarity of the Intel architecture as we move to more and more performance," Rattner said. "That's why we could bring up Windows and Linux environments with relatively little effort." 

The system is different in some ways, though, notably in its lack of cache coherency--technology that keeps data stored in each core's high-speed memory bank synchronized with the others on the chip. By contrast, Intel's Larrabee processor, a many-core x86 chip under development for graphics acceleration, is a cache-coherent design that has a large amount of real estate devoted to caching data. 
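
As a loose software analogy for what the absence of cache coherency means, the Python sketch below (again using standard multiprocessing primitives, not anything specific to the SCC) has workers that share no memory and must exchange data explicitly through message queues, much as software on non-coherent cores must communicate deliberately rather than relying on hardware to keep caches synchronized.

from multiprocessing import Process, Queue

def worker(core_id: int, inbox: Queue, outbox: Queue) -> None:
    """Receive one chunk of work as a message and send the result back."""
    chunk = inbox.get()
    outbox.put((core_id, sum(chunk)))

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    workers = [Process(target=worker, args=(i, inbox, outbox)) for i in range(4)]
    for w in workers:
        w.start()
    for i in range(4):
        inbox.put(list(range(i * 1000, (i + 1) * 1000)))   # explicit data transfer
    results = [outbox.get() for _ in workers]
    for w in workers:
        w.join()
    print(sorted(results))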

100 chips for research partners
 
Intel hopes to encourage academics and others to tackle programming challenges on the chip. To that end, Intel plans to share 100 SCC-based systems with various partners in industry and academia. 


Microsoft is one such partner. "We're very excited about this as a research vehicle," said Jim Larus, director of cloud-computing futures at Microsoft Research. 

One major feature of the SCC design is a high-speed "mesh" network that lets each of the 48 cores communicate with others or with the four linked memory controllers. The first-generation Tera-scale chip had such a network, but the second-generation mesh consumes only a third of the power and is accelerated with built-in hardware instructions for minimum communication delays, Rattner said. 

That fast communication was designed in part as a response to what Intel industry partners desired, Rattner said. "They were looking for extremely low latency--not just core to core at the chip level, but interchip as well," he said. 

Each link on the chip can carry 64 gigabytes of data per second. 

Better power management is one element of the new design. The chip cores can be switched on or off as the chip is running. 

"It's extremely clever, because it means the processor could be run in an adaptive mode. Processors could be turned on and off depending on the applications," said Jon Peddie, an analyst with Jon Peddie Research. 

Overall, the chip consumes between 25 and 125 watts, Rattner said. It's built using a manufacturing process with 45-nanometer electronics features.


It consists of 24 dual-core modules linked together. A computer based on the chip can accommodate a maximum of 64GB of memory.


The SCC is the second but not the last generation of Intel's Tera-scale project. In the long run, Intel is telling programmers to brace themselves for computers with thousands of processing cores.

Friday, December 11, 2009

New MSI Motherboard Pictured, P55-GD85

MSI motherboard pictured, supports USB 3.0 and SATA 6Gbps

About a day after another MSI motherboard was pictured, namely the 890FX-GD70, yet another MSI motherboard has shown up online, photographed by the very same site and with an equally large picture gallery for all to see. Yesterday's board was positioned as the upcoming high-end platform thanks to its six PCI Express x16 slots, but, even though the MSI P55-GD85 doesn't have that many PCI Express x16 slots, it still has enough for SLI and CrossFireX setups, while also integrating support for the USB 3.0 and SATA 6Gbps interfaces. 

MSI will become the next manufacturer to build a motherboard with USB 3.0 and SATA 6Gbps capabilities, after Gigabyte and ASUS already announced or released hardware compatible with said interfaces. The upcoming board is based on the P55 chipset and supports LGA 1156 processors. It features an 8+2-phase DrMOS power design and a PLX bridge chip providing extra PCIe lanes, and is capable of APS (Active Phase Switching).  

Even though it doesn't have six PCI Express slots like the MSI card pictured previously, the MSI P55-GD85 still has two, just enough for a good SLI or CrossFireX setup. The motherboard also features seven SATA 3.0Gbps ports and two SATA 6Gbps ports, and is outfitted with four DDR3 memory slots rated for speeds of up to 2,600 MHz. In addition, besides the obvious Power and Reset buttons, the product also has an eSATA connector, dual Gigabit Ethernet ports, two USB 3.0 ports and 7.1-channel audio.

As with all occasions when a product is pictured ahead of time, there is no information on the pricing and availability of the MSI P55-GD85 motherboard. Most likely, the motherboard would have already been introduced by now if it were meant to come out in time for the holidays. As it is, it probably won't become official before 2010. 

Source: Softpedia