The use and development of High Performance Computing (HPC) in Latin America is steadily growing. New challenges come from the capabilities provided by clusters, grids, and distributed systems for HPC, promoting research and innovation in this area. Building on the great success of the previous nine editions, this year the Latin American Conference on High Performance Computing (CARLA 2018) will be held in Bucaramanga, Colombia, from September 26th to 28th. The main goal of CARLA 2018 is to provide a regional forum fostering the growth of the HPC community in Latin America through the exchange and dissemination of new ideas, techniques, and research in HPC. The conference will feature invited talks from academia and industry, as well as short- and full-paper sessions presenting both mature work and new ideas in research and industrial applications.

Suggested topics of interest include, but are not restricted to:

  • Parallel Algorithms: The development, evaluation, and optimization of scalable, general-purpose, high-performance algorithms. Fault tolerance, synchronization- and communication-reducing algorithms, and time-space trade-offs in algorithms.
  • Multicore Architectures and Accelerators: All aspects of high-performance hardware including the optimization and evaluation of processors and networks. Hybrid architectures, trends in HPC and exascale computing.
  • Parallel Programming Techniques: Technologies that support parallel programming for large-scale systems, as well as smaller-scale components that will plausibly serve as building blocks for next-generation high-performance computing architectures. Programming models, parallel programming paradigms, compilers, tools and libraries, energy efficiency, productivity improvement, and good practices for HPC software engineering.
  • Cluster, Grid, Cloud, Fog and Edge Computing, and Federations: All architecture aspects of clouds and distributed computing that are related to high-performance computing systems, including software architecture, configuration, integration, optimization, scalability and evaluation.
  • HPC Education and Outreach: All aspects of HPC education and outreach aimed at enhancing learning across society. Education in parallel, distributed, and large-scale computing in Latin America. Pedagogical issues, educational methods, and learning mechanisms. Curriculum design, teaching experiences, e-learning, e-laboratories, and online courses.
  • HPC Infrastructure and Datacenters: The infrastructure that enables advanced application programs to run efficiently, reliably, and quickly through parallel processing. Data storage systems, monitoring tools, best practices in management, technology trends, and power efficiency.
  • Large Scale Distributed Systems: All software aspects of clouds and distributed computing that are related to high-performance computing systems, including software architecture, configuration, optimization, and evaluation. Heterogeneous distributed computing models, load balancing, concurrent data structures, integration, and storage preservation.
  • Scientific and Industrial Computing: The scientific and practical approach to computation and its applications, building on the theory, experimentation, and engineering behind the design and use of computers. Parallel numerical methods, large-scale simulations, accelerated computing applications, highly scalable applications, convergence, workflow management, and scalability on future architectures.
  • Modeling and Evaluation of High-Performance Applications and Tools: Novel methods and tools for measuring, evaluating, and/or analyzing performance. “Performance” may be broadly construed to include any number of metrics, such as execution time, energy, power, or potential measures of resilience. Power consumption, performance prediction, and robustness evaluation.
  • Data Analytics, Data Management and Data Visualization: All aspects of data analytics, visualization and storage related to high-performance computing systems, from Big Data to Smart Data. Storage, memory systems, File Systems, Data intensive applications, visual analytics and in-situ analytics.
  • AI, Machine Learning, Deep Learning: Within the last few years, machine learning and AI have become vital topics within the HPC community. Related applications are changing HPC architectures, as they present challenges for scalable machine learning on HPC systems, driving specialized software development using advanced computing techniques and large-scale infrastructures.
  • Special Topics in Advanced Computing: New trends in technology for large-scale and advanced computing systems. Quantum computing, FPGA-based systems, streaming data-flow architectures, manycore co-processors, highly parallel systems, and customization.