Field of Interest: cs
Experiments: MAST, JET, ITER
Deadline: 2017-06-21
Region: Europe
Job description:
Data Centric & High Performance Computing Experts
Job type: Permanent Full Time
Location: Culham, Oxfordshire
Salary: £44,574 to £49,834 (including Market Premium allowance, dependent on experience) plus industry-leading benefits, including an outstanding pension scheme.
Closing date: 21 June 2017.
Fusion, the process that powers the Sun and the stars, could play a significant part in the world's carbon-free energy portfolio. CCFE is a leading fusion research laboratory, specialising in Magnetic Confinement Fusion (MCF). Our scientists and engineers are working with international partners from industry and academia to develop fusion as a new source of clean energy for tomorrow's power stations. CCFE is home to MAST, a novel fusion device and flagship of the UK fusion programme, together with the European experiment JET, currently the world's largest MCF device. In addition, we are key partners in the €16 billion ITER experiment under construction in the south of France (a device which, when fully operational, is expected to generate up to 2 PB of data per day).
The delivery of world-class fusion science relies upon an array of complex data delivery and computing systems that enable routine, fast access to data resources, together with access to High Performance and High Throughput computing systems, irrespective of whether scientists are at Culham or halfway around the world. Advances in data technologies, the development of core ITER-relevant infrastructures, and imminent challenges in the era of Big Data and Exascale computing require further development of our data delivery and scientific computing infrastructure.
We are seeking high-calibre developers and data scientists to join our team and work with our national and international partners to deliver projects based upon data delivery, data object storage, and data-centric high performance computing. As a highly skilled Research Software Engineer or Data Scientist, you will be part of a focused project team with responsibility for innovation and development of CCFE's computing infrastructure. You will play a key role in systems development, testing, implementation and technical support, focused upon the ITER era of Big Data processing. You will be expected to work with international partners from the academic and industrial world at the frontier of data-centric and high performance computing, to give presentations and represent the UKAEA at national and international events, publishing your work in the academic press where appropriate. For those willing to take on a challenge, this role offers an exciting opportunity to develop a career at the frontier of computing technology and to steer the fusion community's infrastructure into the ITER era of Big Data and Extreme Scale Computing.
Knowledge, skills and experience
Essential
(Candidates will have most, if not all, of the following):
- Relevant scientific/engineering or computing degree
- A deep knowledge of the Research Software Lifecycle and associated infrastructure
- Demonstrable experience of at least one high-level programming language (C, C++, Fortran, etc.) as well as high-level scripting languages
- Expertise in UNIX or Linux operating systems, experience of computer cluster management, and knowledge of virtualisation technologies, particularly cloud computing services or container technology (ideally OpenStack)
- Experience of having worked with networked data infrastructure
- Knowledge of, or ideally experience of having worked within the international fusion community
(Note: a background in computing/modelling within the High Energy Physics arena should deliver most, if not all, of the skills required by the advertised posts, as the HPC/HTC challenges of fusion and HEP are very well aligned.)
Desirable
- Experience of cluster-computing frameworks/Big Data analytics technologies such as Apache Spark, Flink, etc.
- An interest or background in Machine Learning (e.g. TensorFlow), advanced analysis methodologies (e.g. Bayesian inference-based analysis), Uncertainty Quantification techniques, etc.
- Experience or interest in provenance capture technology (incl. W3C PROV)
- Experience of Scientific Workflows and workflow infrastructures
- A background in High Performance Computing (MPI and OpenMP parallelisation, CUDA etc.)
- Knowledge of distributed data object storage technologies, particularly Ceph
- Experience of at least one of EGI, EUDAT or Indigo-DataCloud
- Experience of container orchestration using tools such as Kubernetes or Mesos
- Proficiency with performance and exception monitoring tools
More Information:https://jobs.telegraph.co.uk/job/7617850/data-centric-and-high-performance-computing-experts/