
News

System related news items.

Please subscribe to our RSS news feed.

COSMOS nominated for 2015 Annual HPCwire Readers’ Choice Awards

Since 1986, HPCwire has been the world’s top information resource for high performance computing. Every year its readers vote to recognise the most outstanding individuals and organisations in the industry. We are very pleased to say that this year the CTC and COSMOS have been nominated in the following categories:

Q8 – Best use of High Performance Data Analytics

The Stephen Hawking Centre for Theoretical Cosmology, University of Cambridge, uses the first Intel Xeon Phi-enabled SGI UV2000, with its co-designed "MG Blade" Phi housing, and achieved a 100x speed-up of the MODAL code used to probe the cosmic microwave background by porting and optimising MODAL for the Intel Xeon Phi coprocessor.

Q16 – Best Data-Intensive System (End User Focused)

The COSMOS supercomputing facility at the Stephen Hawking Centre for Theoretical Cosmology, University of Cambridge, which uses the world’s first Intel Xeon Phi-enabled SGI UV2000 with its co-designed "MG Blade" Phi housing.

Q18 – Best HPC Collaboration Between Academia & Industry

The long-term SGI/Intel collaboration with the COSMOS supercomputing facility at the Stephen Hawking Centre for Theoretical Cosmology, which, among other achievements, delivered a 100x speed-up of cosmology code.

If you have been a user of COSMOS and it has benefitted your research in cosmology, astrophysics or particle physics, we would be very grateful if you could go to the HPCwire website and vote for us in the above categories. The voting can be accessed here. Please note that voting for your own organisation is not allowed.

Many thanks.

New insights found in black hole collisions

Illustration of two rotating black holes in orbit. Graphic by Midori Kitagawa.

New research provides revelations about the most energetic event in the universe — the merging of two spinning, orbiting black holes into a much larger black hole.

An international team of astronomers, including researchers from the University of Cambridge, has found solutions to decades-old equations describing what happens as two spinning black holes in a binary system orbit each other and spiral in toward a collision.

The results, published in the journal Physical Review Letters, should significantly impact not only the study of black holes, but also the search for elusive gravitational waves – a type of radiation predicted by Einstein's theory of general relativity – in the cosmos.

Unlike planets, whose average distance from the sun does not change over time, general relativity predicts that two black holes orbiting around each other will move closer together as the system emits gravitational waves.

"An accelerating charge, like an electron, produces electromagnetic radiation, including visible light waves," said Dr Michael Kesden of the University of Texas at Dallas, the paper's lead author. "Similarly, any time you have an accelerating mass, you can produce gravitational waves."

The energy lost to gravitational waves causes the black holes to spiral closer and closer together until they merge, the most energetic event in the universe after the Big Bang. That energy is emitted not as easily observed visible light but as gravitational waves, which are much more difficult to detect.
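The timescale of this gravitational-wave-driven inspiral can be estimated with the standard quadrupole-approximation result of Peters (1964) for a circular binary. The sketch below is illustrative only and not part of the article's analysis; the masses and separation are arbitrary example values.

```python
# Coalescence time of a circular black hole binary due to
# gravitational-wave emission (Peters 1964, quadrupole approximation):
#   t_c = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
# The example numbers below are illustrative, not from the article.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def coalescence_time(m1, m2, a):
    """Seconds until merger for a circular binary of masses m1, m2 (kg)
    at initial separation a (m), ignoring spin and eccentricity."""
    return 5 * c**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))

# Two 10-solar-mass black holes separated by one million kilometres:
t = coalescence_time(10 * M_SUN, 10 * M_SUN, 1e9)
print(f"inspiral time ~ {t / 3.15e7:.2e} years")
```

The strong a^4 dependence is why such binaries linger for aeons at wide separations, then merge rapidly once gravitational-wave emission takes over.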

While Einstein's theories predict the existence of gravitational waves, they have not been directly detected. But the ability to 'see' gravitational waves would open up a new window to view and study the universe.

Optical telescopes can capture photos of visible objects, such as stars and planets, and radio and infrared telescopes can reveal additional information about invisible energetic events. Gravitational waves would provide a qualitatively new medium through which to examine astrophysical phenomena.

"Using gravitational waves as an observational tool, you could learn about the characteristics of the black holes that were emitting those waves billions of years ago, information such as their masses and mass ratios, and the way they formed," said co-author and PhD student Davide Gerosa, of Cambridge's Department of Applied Mathematics and Theoretical Physics. "That's important data for more fully understanding the evolution and nature of the universe."

Later this year, upgrades to the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the US and VIRGO in Europe will be completed, and the first direct measurements of gravitational waves may be just around the corner. Around the same time, the LISA Pathfinder mission will be launched as a test mission for establishing a gravitational wave detector of unprecedented sensitivity in space.

"The equations that we solved will help predict the characteristics of the gravitational waves that LIGO would expect to see from binary black hole mergers," said co-author Dr Ulrich Sperhake, who, along with Gerosa, is also a member of Cambridge's Centre for Theoretical Cosmology. "We're looking forward to comparing our solutions to the data that LIGO collects."

The equations the researchers solved deal specifically with the spin angular momentum of binary black holes and a phenomenon called precession.

"Like a spinning top, black hole binaries change their direction of rotation over time, a phenomenon known as precession," said Sperhake. "The behaviour of these black hole spins is a key part of understanding their evolution."

Just as Kepler studied the motion of the Earth around the sun and found that orbits can be ellipses, parabolae or hyperbolae, the researchers found that black hole binaries can be divided into three distinct phases according to their rotation properties.
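The classical analogy can be made concrete: a Keplerian two-body orbit falls into exactly one of three classes depending on its eccentricity. This is a standard textbook classification, included here only as an illustrative sketch, not as the researchers' method.

```python
# Classifying a Keplerian two-body orbit by its eccentricity e:
# e < 1 gives a bound ellipse, e == 1 a marginally unbound parabola,
# e > 1 an unbound hyperbolic flyby. Illustrative sketch only.

def classify_orbit(e):
    """Return the conic-section type for eccentricity e >= 0."""
    if e < 0:
        raise ValueError("eccentricity must be non-negative")
    if e < 1:
        return "ellipse"     # bound orbit (a circle when e == 0)
    if e == 1:
        return "parabola"    # marginally unbound
    return "hyperbola"       # unbound flyby

print(classify_orbit(0.0167))  # Earth's orbital eccentricity -> ellipse
```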

GRChombo, a new numerical general relativity code with adaptive mesh refinement

We have created GRChombo, a new code to solve the Einstein equations of general relativity numerically. Its novel feature compared with other existing codes is block-structured adaptive mesh refinement. This unique feature allows us to accurately resolve widely separated length scales that may appear in different parts of the computational domain whilst keeping the cost of the computation under control. For instance, in a black hole binary one has to be able to resolve the region far from the black holes (the wave zone), the length scales associated with the orbital motion, and finally the small scales associated with the dynamics of each black hole. This unique capability of GRChombo should enable us to disentangle the physics hidden in the non-linearities of Einstein’s theory of gravity.
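The core idea of adaptive mesh refinement, spending resolution only where the solution varies rapidly, can be illustrated with a toy 1D tagging criterion. GRChombo's actual tagging conditions and block-structured data layout are far more sophisticated; the function below is a hypothetical sketch of the principle only.

```python
# Toy illustration of the idea behind adaptive mesh refinement (AMR):
# flag cells for refinement wherever the solution's gradient is steep,
# so fine resolution is concentrated where it is needed. This is a
# hypothetical 1D sketch, not GRChombo's actual tagging scheme.

def flag_for_refinement(values, dx, threshold):
    """Return indices of interior cells whose |gradient| exceeds threshold."""
    flagged = []
    for i in range(1, len(values) - 1):
        grad = abs(values[i + 1] - values[i - 1]) / (2 * dx)  # central difference
        if grad > threshold:
            flagged.append(i)
    return flagged

# A field that is smooth everywhere except a sharp step near x = 0.5:
dx = 0.01
field = [0.0 if i * dx < 0.5 else 1.0 for i in range(100)]
print(flag_for_refinement(field, dx, threshold=1.0))  # -> [49, 50]
```

In a real AMR code the flagged cells are grouped into rectangular blocks and covered by a finer grid, recursively, which is what lets a single simulation span the wave zone, the orbit, and each black hole's horizon.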

The COSMOS supercomputer has proved key in the development of this new code. To solve the Einstein equations accurately enough for physical systems of interest, such as black hole binaries, we need sufficiently large computational domains with several levels of refinement. With valuable continued support from Juha Jaykka and James Briggs, the COSMOS shared-memory architecture has been very effective at exploiting the massive parallelism that GRChombo is capable of.

GRChombo is now at a stage in which we can start to explore new physics. In particular, we are going to use this new code to probe strong gravity signatures in cosmology, to understand the non-linear dynamics in modified theories of gravity, to determine the endpoints of black hole instabilities and test the cosmic censorship conjecture, and to extract the dynamics of strongly interacting gauge theories via the gauge/gravity correspondence. While we carry out this exciting scientific programme, we will continue to optimise this code to run on COSMOS with the help of Juha and James.

GRChombo is based on Chombo, publicly available software developed by the Lawrence Berkeley National Laboratory. The current GRChombo team consists of: Katy Clough (King’s College London), Pau Figueras (DAMTP, Cambridge), Hal Finkel (Argonne National Laboratory), Markus Kunesch (DAMTP, Cambridge), Eugene Lim (King’s College London) and Saran Tunyasuvunakool (DAMTP, Cambridge). For more information, visit our website: http://grchombo.github.io or check out our paper (arXiv:1503.03436).

COSMOS allows us to peer backward to the Big Bang

The COSMOS supercomputer is helping us to push back our understanding of the universe to the first moments after the Big Bang. The Planck satellite has yielded the highest precision measurements so far of the cosmic microwave background (CMB) radiation – the radiation left over from the Big Bang – but analysing the Planck data is a task so massive it can only be done on supercomputers like COSMOS.

Unlike many supercomputers, which are in fact clusters of smaller systems networked together, COSMOS is a single system. This distinction is vital when it comes to software code development. COSMOS’s flexible shared-memory architecture is ideal for this purpose. It allows our researchers to focus on innovative codes first and develop efficient parallelism in their software while proving their theories. They can go from working on their laptop to COSMOS much more easily than programming for a large, distributed system, where the parallelism of the code becomes much more critical to get their applications to work.

COSMOS has proved essential for our work with the CMB, particularly the Planck satellite maps of the entire sky. Recent analysis of CMB observations confirms predictions that a period of enormously fast exponential expansion, which cosmologists call inflation, occurred in the early universe. During inflation, very small changes, or quantum fluctuations, were imprinted into the fabric of space-time. These later became the seeds for the development of all the structures we now see in the universe. Establishing the fundamental character of these fluctuations would offer vital clues about how the universe emerged out of inflation, one of the most important goals in fundamental science. So, supercomputers like COSMOS are critical to our understanding of the earliest times in the universe.

Another major project being undertaken on COSMOS is the study of spectroscopic signatures of particular molecules in exoplanet atmospheres (the ExoMol project). These signatures can help characterise exoplanets and assess whether they might host extra-terrestrial life.

More information about this story can be found here: http://www.hpcwire.com/2014/10/10/peering-backward-big-bang-ctc-cosmos/

ExoMol: http://www.exomol.com/

COSMOS becomes an Intel Parallel Computing Centre

COSMOS has been awarded IPCC status by Intel.

Cambridge’s COSMOS supercomputer, the largest shared-memory computer in Europe, has been named by computer giant Intel as one of its Parallel Computing Centres, building on a long-standing collaboration between Intel and the University of Cambridge.

The COSMOS facility, which is located in the Stephen Hawking Centre for Theoretical Cosmology (CTC) at the University, is dedicated to research in cosmology, astrophysics and particle physics. It was switched on in 2012.

To date, the facility has been used to simulate the dynamics of the early Universe and for pipelines analysing the statistics of Planck satellite maps of the cosmic microwave sky. The COSMOS supercomputer was the first very large (over 10 terabyte) single-image shared-memory system to incorporate Intel Xeon Phi coprocessors, which are behind the most power-efficient computers in the world.

Intel Parallel Computing Centres (IPCC) are universities, institutions, and labs that are leaders in their field. The centres are focusing on modernising applications to increase parallelism and scalability through optimisations that leverage cores, caches, threads, and vector capabilities of microprocessors and coprocessors.

As an IPCC, the COSMOS research facility will receive enhanced Intel support from its applications and engineering teams, as well as early access to future Intel Xeon Phi and other Intel products aimed at high-performance computing. IPCC status will allow COSMOS to better focus on delivering computing advances to the scientific community it serves and also highlight the efforts Intel has put into advancing high-performance computing.

When operating at peak performance, the COSMOS supercomputer can perform 38.6 trillion calculations per second (38.6 TFLOPS). It is based on SGI UV2000 systems with 1,856 Intel Xeon E5-2600 processor cores, 14.8 TB of RAM and 31 Intel Xeon Phi coprocessors.

The research centre has already put Xeon Phi to use in Planck satellite analysis of the cosmic microwave sky and in simulations of the very early Universe. These capabilities will become even more important in the near future with the arrival of new generations of Intel Xeon Phi coprocessors and associated technologies.

“I am very pleased that the COSMOS supercomputer centre has been selected among the vanguard of Intel Parallel Computing Centres worldwide,” said Professor Stephen Hawking, founder of the COSMOS Consortium. “These are exciting times for cosmology as we use COSMOS to directly test our mathematical theories against the latest observational data. Intel’s new technology and this additional support will accelerate our scientific research.”

“Building on COSMOS success to date with Intel’s Many Integrated Core-based technology, our new IPCC status will ensure we remain at the forefront of those exploiting many-core architectures for cosmological research,” said COSMOS director, Professor Paul Shellard. “With the SGI UV2 built around Intel Xeon processors E5-2600 family and Intel Xeon Phi processors, we have a flexible HPC platform on which we can explore Xeon Phi acceleration using distributed, offload and shared-memory programming models. Intel support will ensure fast code development timescales using MICs, enhancing COSMOS competitiveness and discovery potential.”

“Intel Parallel Computing Centres are collaborations to modernise key applications to unlock performance gains that come through parallelism, enabling the way for the next leap in discovery. We are delighted to be working with the COSMOS team in this endeavour as they strive to understand the origins of the universe,” said Stephan Gillich, Director Technical Computing, Intel EMEA.

Programmer effort will be targeted strategically, first, to ensure that ambitious science goals are achieved in a timely fashion and, secondly, to prepare for future supercomputer architectures, identifying needs and opportunities, while developing appropriate consortium benchmarks for machine evaluation. Scientific codes of importance to the research community, which we will tackle over the next two years, include:

Planck satellite science – The Planck satellite, which has surveyed the cosmic microwave sky to unprecedented precision, offers the finest cosmological dataset available, with COSMOS researchers playing a leading role in science exploitation. We have two main goals: First, we will optimise the community cosmological parameter estimation code CAMB/COSMOMC (and/or similar codes) for Xeon Phi. Secondly, we will port key components of the non-Gaussian statistics pipeline MODAL, enhancing discovery potential and opening up the efficient use of non-Gaussian data for parameter estimation and inflationary model surveys.

Early universe simulations – We will advance lattice codes recreating the Big Bang using increasingly realistic simulations of inflation and the Standard Model. Using the successful experience from the WALLS code (cf. Intel White Paper), we will create public library routines and examples for complex 3D lattice codes.

Billion galaxy surveys – We will advance the analysis of Dark Energy Survey and other galaxy survey data by accelerating the creation of mock galaxy catalogues essential for science exploitation. This will be achieved by optimising simplified N-body (particle-mesh) simulation codes, such as the grid-based PICASO code. We will also improve post-processing pipelines, including non-Gaussian statistical analysis codes.

Black Hole/Gravity adaptive mesh codes – We will port 3D adaptive mesh codes which incorporate fully general relativistic effects to Xeon Phi. We will apply this expertise to GR black hole codes relevant for gravitational wave experiments (LIGO). Using these advances, we will endeavour to optimise key components of AMR hydrodynamic galaxy formation codes (such as RAMSES and AREPO).

COSMOS is part of the Distributed Research utilising Advanced Computing (DiRAC) facility, funded by the Science & Technology Facilities Council and the Department for Business, Innovation and Skills.

Members of the Centre for Theoretical Cosmology overseeing the COSMOS IPCC:

James Briggs, COSMOS Parallel Programmer
Dr Juha Jaykka, COSMOS System Manager
Professor Paul Shellard (PI), Director, Centre for Theoretical Cosmology
Dr Ulrich Sperhake, Lecturer in Theoretical Physics

Together with external members of the COSMOS Consortium Executive Committee:

Prof Brad Gibson, Professor of Astrophysics, University of Central Lancashire
Prof Martin Haehnelt, Professor of Astronomy, University of Cambridge
Prof Mark Hindmarsh, Professor of Physics, University of Sussex
Professor Andrew Liddle, Professor of Astronomy, University of Edinburgh
Dr Eugene Lim, Lecturer in Physics, King’s College London
Prof Will Percival, Professor of Astronomy, University of Portsmouth
Prof Arttu Rajantie, Professor of Physics, Imperial College London

University of Cambridge press release:

http://www.cam.ac.uk/research/news/uks-cosmos-supercomputing-research-facility-becomes-an-intel-parallel-computing-centre

Other links to the story:

http://phys.org/news/2014-06-cosmos-supercomputing-facility-intel-parallel.html

http://www.scientificcomputing.com/news/2014/06/cosmos-becomes-intel-parallel-computing-center

http://www.hpcwire.com/off-the-wire/cosmos-award-ipcc-status-intel