XSEDE User Portal (XUP) (Online Service)
The XSEDE User Portal (XUP) gives XSEDE users, collaborators, and staff who have XSEDE accounts access to their "My XSEDE" profile and to information about resources, documentation, allocations, training, and many other services.

XSEDE User Portal (XUP) Mobile (Online Service)
The XSEDE User Portal (XUP) for mobile devices.

Research Software Portal (RSP) (Online Service)
A portal designed to help research software users (researchers, educators, students, application developers), research software developers, and research computing administrators work together efficiently by sharing requirements, plans, activity status, and information about available software.
XSEDE Beacon (XSEDE Globus Connect Server, Online Service)

XSEDE JIRA (Online Service)
XSEDE JIRA issue tracking, for staff use.

XSEDE MyProxy (Online Service)
MyProxy is open-source software for managing X.509 Public Key Infrastructure (PKI) security credentials (certificates and private keys). Note: a cron job that runs at NCSA performs an XCDB query to generate the grid-mapfile needed by myproxy.xsede.org. XSEDE Allocations, Accounting & Account Management CI (A3M) staff at NCSA are responsible for that cron job.
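The grid-mapfile workflow behind the MyProxy entry above can be sketched as a crontab fragment. This is an illustration only: the script name, schedule, and paths are hypothetical placeholders, not the actual NCSA job.

```shell
# Hypothetical crontab fragment illustrating the NCSA job described above.
# generate-gridmap.sh (placeholder name) would query XCDB and write the
# mapping of certificate DNs to local accounts that MyProxy consults.
0 * * * * /usr/local/sbin/generate-gridmap.sh > /etc/grid-security/grid-mapfile.tmp && mv /etc/grid-security/grid-mapfile.tmp /etc/grid-security/grid-mapfile

# On the client side, users retrieve a short-lived credential with the
# standard MyProxy client (replace <username> with an XUP login):
# myproxy-logon -s myproxy.xsede.org -l <username> -t 72
```

The temporary-file-then-rename pattern keeps the grid-mapfile from being read in a half-written state while the query runs.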
XSEDE Website (Online Service)
The XSEDE website provides public information about XSEDE to the general public, current and potential users, members of the research cyberinfrastructure community, funding organizations, and collaborators.

XSEDE Comet (XSEDE Globus Connect Server, Online Service)
Comet is a dedicated XSEDE cluster. This endpoint can be used to access data stored on the Comet file system.

XSEDE Kyric (XSEDE Globus Connect Server, Online Service)
Kentucky Research Informatics Cloud (KyRIC) large-memory nodes.

XSEDE Karnak Service (XSEDE Globus Connect Server, Online Service)

XSEDE NCAR GLADE (XSEDE Globus Connect Server, Online Service)
The Globally Accessible Data Environment (GLADE) is a centralized file service that gives users a common view of their data across the HPC, analysis, and visualization resources managed by CISL. This endpoint can be used to access data stored in the GLADE file spaces.

XSEDE LSU CCT supermic (XSEDE Globus Connect Server, Online Service)
SuperMIC is a 925 TFlops (peak) Xeon Phi-accelerated cluster. SuperMIC has 360 nodes, each with 20 Intel Ivy Bridge 2.8 GHz cores, 64 GB of RAM, and two Intel Xeon Phi 7120P coprocessors; 20 of the nodes also have NVIDIA K20X GPUs. The cluster is 40% allocated to the XSEDE user community and 60% dedicated to authorized users of the LSU community. Access is restricted to those who meet the criteria stated on the LSU website.
XSEDE Ticket System (Online Service)
The XSEDE ticketing system.

TAMU FASTER DTN2 XSEDE Endpoint (XSEDE Globus Connect Server, Online Service)

XSEDE Metrics on Demand (XDMoD) (Online Service)
The XDMoD (XD Metrics on Demand) tool provides HPC center personnel and senior leadership with the ability to easily obtain detailed operational metrics of HPC systems, coupled with extensive analytical capability to optimize performance at the system and job level, ensure quality of service, and provide accurate data to guide system upgrades and acquisitions.

XSEDE Digital Object Repository (XDOR) (Online Service)
The digital object repository for the Extreme Science and Engineering Discovery Environment (XSEDE) project.

XSEDE Resource Allocation Service (XRAS) (Online Service)
The XSEDE Resource Allocation Service (XRAS) is the web and database service that supports the XSEDE allocation process. It includes a database for storing information about allocation opportunities, allocation proposals, proposal reviews, and allocation process results, plus web interfaces for administering allocation processes and for reviewing allocation proposals. It uses the XSEDE User Portal (XUP) as the web interface for entering allocation proposals.
hpcdev-pub04 (XSEDE Globus Connect Server, Online Service)

XSEDE Expanse (XSEDE Globus Connect Server, Online Service)

XSEDE PSC bridges (XSEDE Globus Connect Server, Online Service)
Regular Shared Memory (RSM) nodes each consist of two Intel Xeon EP-series CPUs and 128 GB of 2133 MHz DDR4 RAM configured as 8 DIMMs of 16 GB each. A subset of RSM nodes contain NVIDIA Tesla GPUs: 16 nodes contain two K80 GPUs each, and 32 more RSM nodes with two Pascal GPUs each were anticipated in late 2016. Bridges contains many hundreds of RSM nodes for capacity and flexibility.

XSEDE TACC stampede2 (XSEDE Globus Connect Server, Online Service)
The Stampede2 Dell/Intel Knights Landing (KNL) system is configured with 4204 Dell KNL compute nodes, each with a stand-alone, bootable Intel Xeon Phi Knights Landing processor. Each KNL node includes 68 cores, 16 GB of MCDRAM, 96 GB of DDR4 memory, and a 200 GB SSD. Stampede2 will deliver an estimated 13 PF of peak performance. Compute nodes have access to dedicated Lustre parallel file systems totaling 28 PB raw, provided by Seagate. An Intel Omni-Path Architecture switch fabric connects the nodes and storage in a fat-tree topology with a point-to-point bandwidth of 100 Gb/s (unidirectional). Sixteen additional login and management servers complete the system. Later in 2017, Stampede2 Phase 2, consisting of next-generation Xeon servers and additional management nodes, will be deployed.

XSEDE UD DARWIN (XSEDE Globus Connect Server, Online Service)
Collection for XSEDE users to access data on DARWIN.
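The Globus Connect Server collections above are all reachable with the standard Globus CLI. A minimal sketch follows; the search term comes from this listing, and the endpoint UUID is a placeholder you would copy from the search output.

```shell
# Sketch of locating and browsing one of the endpoints listed above with
# the Globus CLI. Requires an authenticated Globus session and membership
# in the relevant allocation; not runnable offline.
globus login                              # opens a browser for authentication
globus endpoint search "XSEDE Expanse"    # find the endpoint's UUID
# Substitute the UUID reported by the search (placeholder below):
globus ls "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:/~/"
```

Transfers between any two of these endpoints work the same way, with `globus transfer` taking a source and destination `UUID:path` pair.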
XSEDE Confluence Wiki (Online Service)
The XSEDE Confluence wiki, primarily for staff use.

XSEDE Central Database (Online Service)
The XSEDE central resource accounting and user database (XCDB).

XSEDE GitHub Repository (Online Service)
The official XSEDE GitHub project and repositories.