Monthly Archives: April 2012

Zunicore adds GPUs to clouds

30 April 2012

Zunicore, the cloudy infrastructure division of Peer 1 Hosting, is going ceepie-geepie hybrid and making its cloud suitable for parallel supercomputing workloads that are goosed by GPU coprocessors.

The GPU-assisted cloud capacity is in beta testing now and will be opened up to commercial customers in July, a spokesperson at Zunicore tells El Reg.

Zunicore already peddles cloud capacity based on x86 processors via an easy-to-use self-service portal that lets customers buy virtualized CPU, memory, disk, and network capacity on an hourly basis and pool it together. Zunicore does not offer preconfigured, static virtual server images, as Amazon does with its EC2 cloud, but lets you set the virtual capacity you need. The Zunicore cloud fabric also has autoscaling to dial capacity up and down as the workload demands, which the normal virtual server hosting available through its Peer 1 parent does not.
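That mix-and-match hourly billing model boils down to a simple rate calculation per resource. The sketch below illustrates the idea with hypothetical placeholder rates, not Zunicore's actual prices:

```python
# Illustrative hourly-billing calculator for a self-service cloud in the
# Zunicore mold, where you size CPU, RAM, and disk independently.
# All rates are hypothetical placeholders, not real Zunicore pricing.

def hourly_cost(vcpus, ram_gb, disk_gb,
                rate_vcpu=0.05, rate_ram=0.01, rate_disk=0.0002):
    """Cost per hour of a custom-sized virtual machine."""
    return vcpus * rate_vcpu + ram_gb * rate_ram + disk_gb * rate_disk

def monthly_cost(hours, vcpus, ram_gb, disk_gb):
    """Cost of running the same instance for a given number of hours."""
    return hours * hourly_cost(vcpus, ram_gb, disk_gb)

# A 4-vCPU, 8GB, 100GB instance running around the clock for 30 days:
print(f"${monthly_cost(720, 4, 8, 100):.2f}")  # → $216.00
```

The appeal of the model is that autoscaling only changes the multipliers hour by hour; the bill is always the sum of what was actually provisioned.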

The Zunicore cloud was launched last November by Peer 1, which has 18 data centers across Canada, the US, and the UK; the company is headquartered in Vancouver, British Columbia, and its stock trades on the Toronto Stock Exchange. Since its launch last fall, Zunicore has added 3,700 customers to its infrastructure cloud (numbers through the end of March).

Like other cloud providers, Zunicore has built its fluffy infrastructure on two-socket Xeon servers, and the base hardware underlying virtual machines comes with 32GB of main memory, 400GB of local disk capacity, a Gigabit Ethernet private and public network, and a 10 Gigabit Ethernet option if you want to create a virtual cluster in the pool of Zunicore machines.

Now, if you want to put an Nvidia Tesla M2050 GPU into the server, and you are willing to wait 15 minutes for it to be configured, Zunicore can offer you a ceepie-geepie setup. You can be billed hourly or

Source: The Register. Read the complete article here:

Bunker provides cover for KNMI supercomputer

27 April 2012

Bullx B500 is forty times more powerful than the old HPC platform

It is quite the eye-catcher, the flashy supercomputer from the French IT company Bull in the computer room of the KNMI (Royal Netherlands Meteorological Institute) in De Bilt. But no supercomputer comes without impressively robust cooling and power machinery. That equipment stands in and around the former nuclear bunker on the grounds of the weather institute, which first required a thorough renovation involving a good deal of demolition, construction, and head-scratching.

Although the equipment has grown ever smaller over the years, the KNMI houses a machine that still lives up to the popular image of computers in data centers. The red and green lights of the compute nodes blink incessantly behind the silvery, artful metal cladding of the man-high, meters-wide number cruncher. The casing of this Bullx B500 even resembles a computer grid. Looks matter too; leave the finishing touches to the French.

The KNMI put its new Bull supercomputer into service this past March. Bull's 'super' replaces the more than five-year-old Altix system from Silicon Graphics, which no longer offered sufficient computing capacity. The KNMI is using Bull's forty-times-more-powerful high performance computing (HPC) system to calculate weather forecasts and climate models faster, more accurately, and in greater detail.


Still, the most impressive part of the supercomputer environment sits elsewhere on the KNMI grounds in De Bilt. The technical installation for the power supply (the data center has its own connection to the grid) and the cooling is housed in the nuclear bunker. This fallback seat, complete with emergency rations, backup generators, and washing and shower facilities, was intended for the Royal Netherlands Air Force units stationed at Soesterberg. Those units have since left; part of the bunker is still in use as a fallback location for the RIVM in nearby Bilthoven. The

Source: Computable. Read the complete article here:

HP spreads Superdome vPar partitions across Integrity line

26 April 2012

Hewlett-Packard has just rolled out an update to its venerable Unix platform that gives more virtualization options to more customers and better control over hypervisors and guests running on its Itanium-based Integrity and Superdome iron.

HP is on an odyssey (a Project Odyssey, in fact) to bring Xeon processors to the Superdome and Integrity designs and to beef up Linux and Windows to run on these boxes as peers to HP-UX running on Itanium motors in the same machines. But that doesn't mean HP is stopping its investments in HP-UX.

The update to HP-UX 11i v3 is known as the March 2012 update; HP may use sub-release numbers to designate HP-UX 11i v3 updates, as it used to for many years, but if so, it does not publish them or talk about them that way anymore. The March update is shipping now and HP is starting to tell customers about it now, which is why El Reg is telling you about it now.

HP did a big update to HP-UX back in September 2011, coinciding with the shipments of 32-socket, 128-core Superdome 2 servers based on the four-core "Tukwila" Itanium 9300 processors from Intel.

In addition to the new iron support, HP added a new kind of virtualization, akin to Solaris containers and AIX workload partitions, called HP-UX containers, which are distinct from Integrity Virtual Machines (IVMs) and Virtual Partitions (vPars); all of these take slightly different approaches to carving up an Itanium box into baby virtual machines.

Integrity Virtual Machines are based on a hypervisor that can run multiple instances of HP-UX, Windows, or Linux on top of Itanium-based HP servers, while vPars are for HP-UX only, and specifically only on Superdome-class machines with cell board-style motherboards, and scale down to a single

Source: The Register. Read the complete article here:

IBM fires Power-powered Penguins at x86’s weak spots

23 April 2012

Big Blue has not made any huge proclamations to date, but it is not exactly a secret that the people in charge of IBM’s Systems and Software Group want the Power7 processor and its follow-ons to grab a larger share of the systems racket.

To that end, Big Blue is reviving a Linux-only variant of its Power Systems lineup with cheaper hardware and software pricing that it says gives better value on Linux workloads than an x86 setup.

The new machines, part of what will eventually be a family branded PowerLinux, are variants of existing iron that run in bare metal or virtualized mode using IBM’s homegrown PowerVM hypervisor, and which have settings in their firmware that only allow Linux to boot on the machines.

That’s right: the firmware prevents IBM’s own AIX Unix variant and IBM i proprietary operating systems from booting, Scott Handy, vice president of PowerLinux strategy and business development, tells El Reg.

This may be a counterintuitive move, making your iron and hypervisor cheaper when running someone else's operating system, but this is not the first time IBM has done such a thing, and it's a strategy that has had a certain amount of success.

The idea, explains Handy, is not to take on x86 servers in all markets and price bands, which is not even possible since IBM’s Power processors do not support Microsoft’s Windows Server operating system. Way back in the day, the PowerPC 6XX processors did run Windows NT 3.51 for a brief somewhat shining moment, and of course Microsoft’s Power-based game consoles run a variant of the Windows OS, too.

So it is not technically impossible for Windows Server to come to Power7 chips. It is just politically and economically impossible. The best move on Big Blue’s part is to find areas where the x86 platform is weak

Source: The Register. Read the complete article here:

The Open Source Challenge in the Channel

23 April 2012

Source: Linux and Unix Nieuws. Read the complete article here:

‘I’m no visionary’: Torvalds up for $1.3m life-changing gong

20 April 2012

Open-source poster child Linus Torvalds, who kickstarted development of the Linux operating system kernel, has been nominated for the €1m Millennium Technology Prize – but says he's "no visionary" and is surprised Linux has been so successful.

Torvalds and stem cell engineer Dr Shinya Yamanaka are finalists for the gong – one of the world's top awards for science, handed out once every two years by the Technology Academy of Finland. The award, worth £817k ($1.3m), recognises technology innovations that improve the quality of human life.

Dr Yamanaka’s discovery of a new way to make pluripotent stem cells that did not rely on harvesting material from embryos has allowed scientists all over the world to make significant advances in drug research and biotechnology.

Torvalds started on his homemade kernel in 1991, and his work has grown into a free operating system that is used globally. It started out as a simple Intel 386/486-compatible monolithic Unix clone and was developed in the then 21-year-old student’s spare time after he grew sick of microkernel-based Minix. It gathered more and more features and ports to other architectures as contributors piled in source code under the GNU free software licence.

Although it runs on a tiny percentage of desktop computers, Linux is at the heart of tons of consumer electronics, at least half of the world’s internet-facing servers, 93 per cent of supercomputers, and is the kernel of Android – the dominant mobile operating system.

Asked about the contribution of Linux to the world, Torvalds, who is also known for developing the source code management tool Git, was modest in his interview with the academy:

One of the main reasons I think Linux came to be successful in the first place was that I never had very lofty goals. The goalposts for me were always a

Source: The Register. Read the complete article here:

The cloud is best open and hybrid

20 April 2012

JBoss Senior Solution Architect

Computable expert on the topics: BPM and Cloud Computing


More and more companies are opting for cloud computing environments. The trend is toward hybrid clouds, a combination of an on-premise and a public cloud.

Choosing a cloud platform is not a decision to be taken lightly, and the range of options keeps growing. In the long term it is advisable to choose a platform that is open and hybrid. That beats a closed cloud platform that leaves the customer facing vendor lock-in. An open cloud platform has considerable advantages: it supports a wider range of applications, operating platforms, and services. Thanks to the flexibility of a hybrid cloud platform, companies and governments can respond quickly to changes in the market and save considerably on costs.

IT decision-makers face the choice of which platform to roll out cloud applications on. An important question is whether the cloud platform should be open or closed in character, because a cloud platform is not just another server platform: it must form the basis for a successful rollout of cloud solutions and services. IT decision-makers would do well to choose an open cloud built on Linux and open-source software, which provide a secure and stable platform. An open cloud platform is based on open standards and, unlike closed variants, works excellently with other cloud services such as those from Amazon and Google and with social media software, but also with 'closed software' from the likes of Microsoft or VMware. Open cloud is supported by hardware and infrastructure technology from HP, IBM, Dell, and Cisco, among others.

The ideal open hybrid cloud platform is a combination of Linux, virtualization, and middleware. These components offer a solid open cloud platform that is based on open source with a strong

Source: Computable. Read the complete article here:

Three questions Fusion-io’s rivals face after flash API bombshell

20 April 2012

Blocks and Files Happy days are coming for the denizens of data centre technology areas. Applications are going to get such an incredible speed boost that they will become jet-propelled, turbo-charged, super-charged. The average app will virtually be able to stop doing disk I/O ever again… well, for most things.

It’s a two-step improvement. The first step is the use of flash memory, dramatically improving storage I/O speed in terms of latency and IOPS. Latency is then cut even more as the flash memory moves from being implemented as disk-replacing SSDs in storage arrays, then as array controller flash, and then as PCIe-connected flash in servers, cutting out network links altogether.

But not quite. Even though the flash memory is now resident in the server, it is still treated as storage by the operating system. This means I/Os to it go through the operating system’s disk I/O subsystem. But there is no need for that. Instead it should be treated as a subsidiary tier of memory, with data reads and writes done in roughly the same way.
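The gap between "storage-style" and "memory-style" access can be illustrated with the operating system's generic memory-mapping facility: a read() goes through the kernel's disk I/O path, while a memory-mapped region turns the same bytes into plain memory loads. This sketch uses standard OS calls only; it is not Fusion-io's ioMemory SDK:

```python
# Two ways to read the same bytes from a flash-backed file.
# read() drives the disk I/O subsystem with an explicit system call per
# access; mmap() maps the file into the process address space, so reads
# become ordinary memory loads with no I/O call at all.
import mmap
import os
import tempfile

# Create a 4KB file standing in for data on a flash device.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"A" * 4096)
tmp.close()
path = tmp.name

# 1. Storage-style access: explicit I/O through the file API.
with open(path, "rb") as f:
    via_read = f.read(4096)

# 2. Memory-style access: map the file in and index it like a bytearray.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        via_mmap = mm[:4096]

assert via_read == via_mmap  # same data, very different access paths
os.remove(path)
```

The point of putting flash behind a memory-style interface is exactly this second path: once the mapping exists, the disk I/O subsystem drops out of the picture for each individual access.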

Fusion-io has developed software, featuring Auto Commit and Atomic Writes, that performs this task for Fusion’s ioMemory server flash. It is currently being made available as a developer’s SDK. A software developer called Citrusleaf, which provides NoSQL database software, has already demonstrated 400,000 transactions a second from a single database server using it at the DEMO 2012 conference yesterday.
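The guarantee behind a feature like Atomic Writes, a multi-block update that lands entirely or not at all, is one that databases have long emulated in software. A minimal sketch of that software emulation, using the standard write-then-rename pattern (this is a generic POSIX technique, not the Fusion-io SDK API):

```python
# Generic software emulation of an atomic update: write the new data to
# a temporary file, flush it to stable storage, then rename it over the
# original. rename()/os.replace() is atomic on POSIX filesystems, so a
# reader sees either the old contents or the new, never a torn mix.
# This illustrates the guarantee hardware-level atomic writes provide;
# it is not Fusion-io's ioMemory API.
import os
import tempfile

def atomic_write(path, data: bytes):
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname)  # same filesystem as target
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force bytes to stable storage first
        os.replace(tmp_path, path)  # atomic swap into place
    except BaseException:
        os.unlink(tmp_path)
        raise

atomic_write("record.db", b"balance=100")
```

Doing this per update costs an fsync and a metadata operation every time, which is part of why pushing atomicity down into the flash layer is attractive for transaction-heavy workloads.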

Fusion-io’s CEO and chairman, David Flynn, said the effect of this was comparable to the transition from application software using tape as its online store to using disk drives.

Can we really imagine average application software never having to write directly to disk again? I believe we can. Disk will not go away, not at all. In fact we’ll need more of the stuff

Source: The Register. Read the complete article here:

IBM turns in profits bump despite server slump

18 April 2012

IT bellwether IBM missed Wall Street’s revenue projections for Q1 by a smidgen but beat expectations for bottom line growth.

Revenues were up a mere three-tenths of a point, to $24.67bn, and net income up 7.1 per cent, to just a tad over $3bn.

The quarter is a carbon copy, more or less, of the final quarter of 2011, with all the same issues affecting certain IBM product lines: mainframes were down, Power and x86 servers were flat, software had a nice bump, and the 40 emerging markets grew 10 per cent (as gauged in constant currency in their local markets) while the major markets were, as a group, flat at constant currency.

In a conference call with Wall Street analysts, chief financial officer Mark Loughridge said that because of the tough compare with the first quarter of 2011, when mainframe sales were up 40 per cent, the company expected its System z machines to swoon a bit, and they did: sales were down 20 per cent and aggregate compute capacity, as measured in MIPS, was down 5 per cent.

Loughridge said that in-box mainframe upgrades, where customers activate latent engines on installed machines, as well as specialty processor engine sales, which are used to accelerate Java, DB2, and XML workloads or to run Linux, were the big revenue generators on IBM’s big iron in this year’s first quarter. And, because it is not particularly expensive to turn on this latent capacity, the profitability of the System z business was up compared to a year ago.
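Loughridge's point can be sanity-checked with a line of arithmetic: sales down 20 per cent against MIPS down only 5 per cent means revenue per unit of capacity fell noticeably, which is what you would expect if cheap in-box upgrades made up more of the mix:

```python
# Back-of-the-envelope check on the System z numbers quoted above:
# revenue at 80% of the year-ago level, shipped MIPS at 95%.
revenue_ratio = 0.80
mips_ratio = 0.95

revenue_per_mips = revenue_ratio / mips_ratio
print(f"revenue per MIPS: {revenue_per_mips:.3f}x year-ago "
      f"(~{(1 - revenue_per_mips) * 100:.0f}% lower)")
```

Roughly a sixth less revenue per MIPS, with the low cost of switching on latent engines explaining why profitability could still rise.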

Loughridge said that despite the fact that its Power Systems machines had flat revenues for the year, IBM believes that it continued to take market share in the Unix server racket. He added that IBM did 250 competitive replacements involving Power Systems, and that these accounted

Source: The Register. Read the complete article here:

Microsoft goes global with System Center 2012 at MMS

18 April 2012

Microsoft announced the global release of System Center 2012 at its 10th annual Microsoft Management Summit in Las Vegas this morning, and has confirmed that the server version of Windows 8 will be released as Windows Server 2012.

As with the beta, the new System Center 2012 is now available globally in standard and data center editions.

Brad Anderson, corporate vice president of the Management and Security Division, used his keynote to extoll new automation and provisioning features. Pricing structures have also been simplified down to two editions, he said, adding that Microsoft wouldn’t “tax” increases in virtualization like others in the industry.

Standard users can run a couple of virtual operating systems for around $1,300, while the data center edition covers unlimited VMs for just over $3,600, although volume customers should expect serious discounts.
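Those two round list prices imply a simple break-even point between the editions, sketched below (the per-license figures are the approximate prices quoted above, and volume discounts would shift the numbers):

```python
# Break-even sketch for the two System Center 2012 editions, using the
# approximate list prices quoted: ~$1,300 for standard (covering two
# VMs) and ~$3,600 for data center (unlimited VMs).
import math

STANDARD_PRICE, VMS_PER_STANDARD = 1300, 2
DATACENTER_PRICE = 3600

def standard_cost(n_vms):
    """Cost of covering n VMs with stacked standard licenses."""
    return math.ceil(n_vms / VMS_PER_STANDARD) * STANDARD_PRICE

# Smallest VM count at which the data center edition becomes cheaper.
breakeven = next(n for n in range(1, 100)
                 if standard_cost(n) > DATACENTER_PRICE)
print(breakeven)  # 5: three standard licenses ($3,900) exceed $3,600
```

In other words, at list prices the unlimited edition pays for itself at around five VMs per host, which is why Anderson can pitch it as not "taxing" density.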

“We will shift from a highly virtualized world into cloud world,” Anderson predicted.

“The shift will happen, and as we designed System Center 2012 we had this in mind. We’ve built this in a way that allows you to manage across the entire spectrum – your physical servers, virtual servers, private and public cloud – and we’ve optimized to help you move towards cloud computing.”

As with the beta code, System Center 2012 comes in eight chunks: App Controller, Operations Manager, Orchestrator, Service Manager, Virtual Machine Manager, Data Protection Manager, Endpoint Protection, and the Configuration Manager. It supports Windows Server Hyper-V, VMware vSphere, and Citrix XenServer, and works with Windows Server, Sun Solaris, and various Linux and Unix distributions on “commodity hardware.”

He also announced that Windows Server 8 will be officially dubbed Windows Server 2012, and that it’ll be out by the end of the year. When it’s released, Redmond will also put out an update to System Center 2012 that in conjunction will open up some new features not

Source: The Register. Read the complete article here: