.. _ch:mpi_io:

.. role:: raw-math(raw)
   :format: latex html

Parallel I/O
=======================

RAID
-----------------------

.. _fig:raid5:

.. figure:: ../images/RAID5_task.svg
   :align: center
   :width: 50 %

   Visualization of a RAID 5 system with a disk failure at disk 0.

.. admonition:: Task

   #. Reconstruct the missing data from disk 0 in :numref:`fig:raid5`.
   #. Briefly describe the difference between RAID 5 and RAID 6.

MPI-IO
-----------------------

POSIX provides a model for widely portable file system access, but the coordination and optimization required for efficient parallel I/O cannot be achieved with the POSIX interface.
MPI-IO is an alternative low-level interface designed specifically for parallel file I/O.

**MPI_File_open:** Before a file can be accessed in MPI, it needs to be opened with ``MPI_File_open``.

.. code-block:: cpp

   int MPI_File_open(MPI_Comm comm, char *filename, int amode,
                     MPI_Info info, MPI_File *fh);

- ``comm``: communicator of the processes that will access the file.
- ``filename``: name of the file.
- ``amode``: access mode of the file (``MPI_MODE_RDONLY``, ``MPI_MODE_RDWR``, ``MPI_MODE_WRONLY``, ...).
- ``info``: info object.
- ``fh``: handle to the opened MPI file.

**MPI_File_close:** Before finalizing MPI, all files should be closed with ``MPI_File_close``.

.. code-block:: cpp

   int MPI_File_close(MPI_File *fh);

**MPI_File_write_at:** Once a file is opened, it can be accessed with various MPI functions.
``MPI_File_write_at`` writes data to a file at a given offset.
Using different offsets on different processes allows them to write in parallel without interfering with one another.
A minimal sketch that combines these three calls is given after the task below.

.. code-block:: cpp

   int MPI_File_write_at(MPI_File fh, MPI_Offset offset, void *buf,
                         int count, MPI_Datatype datatype, MPI_Status *status);

- ``fh``: opened MPI file.
- ``offset``: offset from the start of the file in bytes.
- ``buf``: pointer to the array that is written to the file.
- ``count``: number of elements written to the file.
- ``datatype``: datatype of the array.
- ``status``: status object.

.. admonition:: Task

   **Mandelbrot with MPI-IO**

   #. Adjust your implementation of the Mandelbrot set for MPI-IO.
   #. Use the same specifications (real_min, real_max, maxIterations, resolution, ...) as in Task 9.3.
   #. All processes should compute a part of the whole image.
   #. Use MPI-IO to write the partial images of all processes to a `Portable Pixel Map (.ppm) <https://netpbm.sourceforge.net/doc/ppm.html>`_.

      - Write the color values as ``MPI_UINT8_T``.
      - Use the magic number ``P6`` to encode that the image is stored in byte format (one byte per color component).

   #. Be careful when using MPI-IO; start with only a few processes.
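
The following is a minimal sketch, not a solution to the task, that combines the three calls introduced above: every rank writes a small buffer to its own non-overlapping region of a shared file via a rank-dependent byte offset. The file name ``example.bin`` and the 8-byte per-rank payload are placeholders chosen for illustration, not part of the task specification.

.. code-block:: cpp

   #include <mpi.h>
   #include <stdint.h>
   #include <stdlib.h>

   int main(int argc, char **argv) {
     MPI_Init(&argc, &argv);

     int rank;
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);

     // every rank prepares a small local buffer (here: 8 bytes filled with its rank id)
     const int count = 8;
     uint8_t *buf = (uint8_t *) malloc(count);
     for (int i = 0; i < count; i++) {
       buf[i] = (uint8_t) rank;
     }

     // all ranks of the communicator collectively open one shared file
     MPI_File fh;
     MPI_File_open(MPI_COMM_WORLD, "example.bin",
                   MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

     // each rank writes to its own, non-overlapping byte range of the file
     MPI_Offset offset = (MPI_Offset) rank * count;
     MPI_File_write_at(fh, offset, buf, count, MPI_UINT8_T, MPI_STATUS_IGNORE);

     // close the file before finalizing MPI
     MPI_File_close(&fh);

     free(buf);
     MPI_Finalize();
     return 0;
   }

The sketch can be built with an MPI compiler wrapper (e.g. ``mpicc``) and, as recommended above, first tested with only a few processes.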