{"id":23518,"date":"2023-10-17T11:34:42","date_gmt":"2023-10-17T09:34:42","guid":{"rendered":"https:\/\/info.gwdg.de\/news\/?p=23518"},"modified":"2023-10-17T11:48:27","modified_gmt":"2023-10-17T09:48:27","slug":"parallel-3d-point-cloud-data-analysis-with-dask","status":"publish","type":"post","link":"https:\/\/info.gwdg.de\/news\/parallel-3d-point-cloud-data-analysis-with-dask\/","title":{"rendered":"Parallel 3D Point Cloud Data analysis with Dask"},"content":{"rendered":"<p dir=\"auto\" data-sourcepos=\"8:1-8:812\">There are many python packages that are tightly integrated with Dask which enables parallel data processing. For instance, consider <code>xarray<\/code> package. This package is used to read datasets in netcdf, hdf5, zarr file formats. Dask comes in play when data is read from multile files where each file is treated as a chunk or by explictly specifying <code>chunks<\/code> parameter to describe how the data needs to be chunked. <code>laspy<\/code> python package is used for reading 3D point cloud data but unfortunately it lacks integration with Dask but still it is possible to use it when data is read from multiple files. Although this techique of loading data from multiple files is quite well known, having the <code>chunks<\/code> parameter feature adds a great deal of flexibility. In this article, a demonstration of such a feature is presented.<\/p>\n<p dir=\"auto\" data-sourcepos=\"10:1-10:240\">A more permanent version of this article with updates and fixes can be found in our <a href=\"https:\/\/gitlab-ce.gwdg.de\/hpc-team-public\/science-domains-blog\/-\/blob\/main\/20231017_parallel-3d-point-cloud-data-analysis-with-dask.md\" class=\"external\" rel=\"nofollow\">Gitlab repository<\/a>.<\/p>\n<h2 dir=\"auto\" data-sourcepos=\"10:1-10:240\">3D Point Cloud format &#8211; Las\/Laz<\/h2>\n<p>A brief overview of the las format. The 3D point cloud data is stored as a las file. 
As shown in the figure below, a las file consists of a header, variable length records, point cloud records and extended variable length records. Depending on the version of the las format specification, a few details may vary in the fields stored in the point cloud records, and the extended variable length records may be absent.<\/p>\n<p><img decoding=\"async\" class=\"transparent\" src=\"https:\/\/pad.gwdg.de\/uploads\/9738de9a-6347-417d-9468-894bd2f1a859.png\" alt=\"https:\/\/pad.gwdg.de\/uploads\/9738de9a-6347-417d-9468-894bd2f1a859.png\" \/><\/p>\n<p dir=\"auto\" data-sourcepos=\"18:1-18:300\">The header holds metadata about the file contents, such as the format version, the number of point cloud records, the size of each record in bytes, the byte offset where the point cloud records start, the data types of the fields in a record, the minimum and maximum values of the spatial coordinates, the scaling factors, and so on.<\/p>\n<p dir=\"auto\" data-sourcepos=\"20:1-20:301\">The <code>laspy<\/code> package provides convenience functions not just to read point cloud records but also to investigate the metadata. One particularly useful thing to determine from the metadata is the amount of memory required to load a las file. 
Here is a small utility function called <code>info<\/code> to provide this information.<\/p>\n<pre class=\"code highlight\" lang=\"python\"><span id=\"LC1\" class=\"line\" lang=\"python\"><span class=\"kn\">import<\/span> <span class=\"n\">humanize<\/span><\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"python\"><span class=\"kn\">import<\/span> <span class=\"n\">laspy<\/span><\/span>\r\n<span id=\"LC3\" class=\"line\" lang=\"python\"><span class=\"kn\">import<\/span> <span class=\"n\">os<\/span><\/span>\r\n<span id=\"LC4\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC5\" class=\"line\" lang=\"python\"><span class=\"k\">def<\/span> <span class=\"nf\">info<\/span><span class=\"p\">(<\/span><span class=\"n\">filename<\/span><span class=\"p\">):<\/span><\/span>\r\n<span id=\"LC6\" class=\"line\" lang=\"python\">    <span class=\"n\">naturalsize<\/span> <span class=\"o\">=<\/span> <span class=\"k\">lambda<\/span> <span class=\"n\">x<\/span><span class=\"p\">:<\/span> <span class=\"n\">humanize<\/span><span class=\"p\">.<\/span><span class=\"nf\">naturalsize<\/span><span class=\"p\">(<\/span><span class=\"n\">x<\/span><span class=\"p\">,<\/span> <span class=\"n\">binary<\/span><span class=\"o\">=<\/span><span class=\"bp\">True<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC7\" class=\"line\" lang=\"python\">    <span class=\"k\">with<\/span> <span class=\"nf\">open<\/span><span class=\"p\">(<\/span><span class=\"n\">filename<\/span><span class=\"p\">,<\/span> <span class=\"sh\">\"<\/span><span class=\"s\">rb<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span> <span class=\"k\">as<\/span> <span class=\"n\">fid<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC8\" class=\"line\" lang=\"python\">        <span class=\"n\">reader<\/span> <span class=\"o\">=<\/span> <span class=\"n\">laspy<\/span><span class=\"p\">.<\/span><span class=\"nc\">LasReader<\/span><span class=\"p\">(<\/span><span class=\"n\">fid<\/span><span 
class=\"p\">)<\/span><\/span>\r\n<span id=\"LC9\" class=\"line\" lang=\"python\">        <span class=\"n\">npoints<\/span> <span class=\"o\">=<\/span> <span class=\"n\">reader<\/span><span class=\"p\">.<\/span><span class=\"n\">header<\/span><span class=\"p\">.<\/span><span class=\"n\">point_count<\/span><\/span>\r\n<span id=\"LC10\" class=\"line\" lang=\"python\">        <span class=\"n\">point_size<\/span> <span class=\"o\">=<\/span> <span class=\"n\">reader<\/span><span class=\"p\">.<\/span><span class=\"n\">header<\/span><span class=\"p\">.<\/span><span class=\"n\">point_format<\/span><span class=\"p\">.<\/span><span class=\"n\">size<\/span><\/span>\r\n<span id=\"LC11\" class=\"line\" lang=\"python\">        <span class=\"n\">memory_size<\/span> <span class=\"o\">=<\/span> <span class=\"n\">npoints<\/span> <span class=\"o\">*<\/span> <span class=\"n\">point_size<\/span><\/span>\r\n<span id=\"LC12\" class=\"line\" lang=\"python\">    <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">Filesize on disk: <\/span><span class=\"si\">{<\/span><span class=\"nf\">naturalsize<\/span><span class=\"p\">(<\/span><span class=\"n\">os<\/span><span class=\"p\">.<\/span><span class=\"n\">path<\/span><span class=\"p\">.<\/span><span class=\"nf\">getsize<\/span><span class=\"p\">(<\/span><span class=\"n\">filename<\/span><span class=\"p\">))<\/span><span class=\"si\">}<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC13\" class=\"line\" lang=\"python\">    <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">Is data compressed: <\/span><span class=\"si\">{<\/span><span class=\"n\">reader<\/span><span class=\"p\">.<\/span><span class=\"n\">header<\/span><span class=\"p\">.<\/span><span class=\"n\">are_points_compressed<\/span><span class=\"si\">}<\/span><span 
class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC14\" class=\"line\" lang=\"python\">    <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">records (points): <\/span><span class=\"si\">{<\/span><span class=\"n\">npoints<\/span><span class=\"si\">}<\/span><span class=\"s\"> (<\/span><span class=\"si\">{<\/span><span class=\"n\">humanize<\/span><span class=\"p\">.<\/span><span class=\"nf\">intword<\/span><span class=\"p\">(<\/span><span class=\"n\">npoints<\/span><span class=\"p\">)<\/span><span class=\"si\">}<\/span><span class=\"s\">)<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC15\" class=\"line\" lang=\"python\">    <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">Each record size: <\/span><span class=\"si\">{<\/span><span class=\"n\">point_size<\/span><span class=\"si\">}<\/span><span class=\"s\"> (<\/span><span class=\"si\">{<\/span><span class=\"nf\">naturalsize<\/span><span class=\"p\">(<\/span><span class=\"n\">point_size<\/span><span class=\"p\">)<\/span><span class=\"si\">}<\/span><span class=\"s\">)<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC16\" class=\"line\" lang=\"python\">    <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">Memory required: <\/span><span class=\"si\">{<\/span><span class=\"n\">memory_size<\/span><span class=\"si\">}<\/span><span class=\"s\"> (<\/span><span class=\"si\">{<\/span><span class=\"nf\">naturalsize<\/span><span class=\"p\">(<\/span><span class=\"n\">memory_size<\/span><span class=\"p\">)<\/span><span class=\"si\">}<\/span><span class=\"s\">)<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<\/pre>\n<p dir=\"auto\" 
data-sourcepos=\"20:1-20:301\">Using the <code>info<\/code> function to determine the memory requirements of a las file called <code>densel-20230906-103153.laz<\/code><\/p>\n<pre class=\"code highlight\" lang=\"python\"><span id=\"LC1\" class=\"line\" lang=\"python\"><span class=\"n\">filename<\/span> <span class=\"o\">=<\/span> <span class=\"sh\">\"<\/span><span class=\"s\">.\/dense1-20230906-103153.laz<\/span><span class=\"sh\">\"<\/span><\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"python\"><span class=\"nf\">info<\/span><span class=\"p\">(<\/span><span class=\"n\">filename<\/span><span class=\"p\">)<\/span><\/span>\r\n<\/pre>\n<pre class=\"code highlight\" lang=\"plaintext\"><span id=\"LC1\" class=\"line\" lang=\"plaintext\">Filesize on disk: 2.1 GiB<\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"plaintext\">Is data compressed: True<\/span>\r\n<span id=\"LC3\" class=\"line\" lang=\"plaintext\">records (points): 148919876 (148.9 million)<\/span>\r\n<span id=\"LC4\" class=\"line\" lang=\"plaintext\">Each record size: 54 (54 Bytes)<\/span>\r\n<span id=\"LC5\" class=\"line\" lang=\"plaintext\">Memory required: 8041673304 (7.5 GiB)<\/span>\r\n\r\n\r\n<\/pre>\n<p dir=\"auto\" data-sourcepos=\"58:1-58:315\">Although this file occupies just 2.1 GiB in disk, the uncompressed data in memory requires at least 4 times more space in memory (7.5 GiB). This function is quite useful in planning compute node resource requirements. <code>\"Each record size\"<\/code> indicates the number of bytes required to store the compressed data on disk.<\/p>\n<p dir=\"auto\" data-sourcepos=\"60:1-60:303\">As mentioned in the introduction, the idea is not to physically split the file into multiple smaller files but logically divide the single file into multiple chunks so Dask can work with these chunks. The immediate question that arises is how to chunk the data. 
The criteria can be any of the following:<\/p>\n<ul>\n<li>by number of points per chunk (example: 2 million per chunk)<\/li>\n<li>by size of each chunk (example: 100MB per chunk)<\/li>\n<li>dividing data into <code>n<\/code> number of partitions (example: 100 partitions. i.e., 100 chunks)<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p>Extending the <code>info<\/code> function to incorporate these features<\/p>\n<pre class=\"code highlight\" lang=\"python\"><span id=\"LC1\" class=\"line\" lang=\"python\"><span class=\"kn\">import<\/span> <span class=\"n\">re<\/span><\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"python\"><span class=\"kn\">from<\/span> <span class=\"n\">dask.utils<\/span> <span class=\"kn\">import<\/span> <span class=\"n\">byte_sizes<\/span><\/span>\r\n<span id=\"LC3\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC4\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC5\" class=\"line\" lang=\"python\"><span class=\"k\">def<\/span> <span class=\"nf\">convert_humansize_to_bytes<\/span><span class=\"p\">(<\/span><span class=\"n\">value<\/span><span class=\"p\">:<\/span> <span class=\"nb\">str<\/span><span class=\"p\">):<\/span><\/span>\r\n<span id=\"LC6\" class=\"line\" lang=\"python\">    <span class=\"n\">match<\/span> <span class=\"o\">=<\/span> <span class=\"n\">re<\/span><span class=\"p\">.<\/span><span class=\"nf\">search<\/span><span class=\"p\">(<\/span><span class=\"sh\">'<\/span><span class=\"s\">([^\\.eE\\d]+)<\/span><span class=\"sh\">'<\/span><span class=\"p\">,<\/span> <span class=\"nf\">str<\/span><span class=\"p\">(<\/span><span class=\"n\">value<\/span><span class=\"p\">))<\/span><\/span>\r\n<span id=\"LC7\" class=\"line\" lang=\"python\">    <span class=\"n\">nbytes<\/span> <span class=\"o\">=<\/span> <span class=\"mi\">1<\/span><\/span>\r\n<span id=\"LC8\" class=\"line\" lang=\"python\">    <span class=\"k\">if<\/span> <span class=\"n\">match<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC9\" class=\"line\" 
lang=\"python\">        <span class=\"n\">start<\/span> <span class=\"o\">=<\/span> <span class=\"n\">match<\/span><span class=\"p\">.<\/span><span class=\"nf\">start<\/span><span class=\"p\">()<\/span><\/span>\r\n<span id=\"LC10\" class=\"line\" lang=\"python\">        <span class=\"n\">text<\/span> <span class=\"o\">=<\/span> <span class=\"n\">match<\/span><span class=\"p\">.<\/span><span class=\"nf\">group<\/span><span class=\"p\">().<\/span><span class=\"nf\">strip<\/span><span class=\"p\">().<\/span><span class=\"nf\">lower<\/span><span class=\"p\">()<\/span><\/span>\r\n<span id=\"LC11\" class=\"line\" lang=\"python\">        <span class=\"n\">nbytes<\/span> <span class=\"o\">=<\/span> <span class=\"n\">byte_sizes<\/span><span class=\"p\">[<\/span><span class=\"n\">text<\/span><span class=\"p\">]<\/span><\/span>\r\n<span id=\"LC12\" class=\"line\" lang=\"python\">        <span class=\"n\">value<\/span> <span class=\"o\">=<\/span> <span class=\"n\">value<\/span><span class=\"p\">[:<\/span><span class=\"n\">start<\/span><span class=\"p\">]<\/span><\/span>\r\n<span id=\"LC13\" class=\"line\" lang=\"python\">    <span class=\"n\">value<\/span> <span class=\"o\">=<\/span> <span class=\"nf\">int<\/span><span class=\"p\">(<\/span><span class=\"nf\">float<\/span><span class=\"p\">(<\/span><span class=\"n\">value<\/span><span class=\"p\">))<\/span> <span class=\"o\">*<\/span> <span class=\"n\">nbytes<\/span><\/span>\r\n<span id=\"LC14\" class=\"line\" lang=\"python\">    <span class=\"k\">return<\/span> <span class=\"n\">value<\/span><\/span>\r\n<span id=\"LC15\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC16\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC17\" class=\"line\" lang=\"python\"><span class=\"k\">def<\/span> <span class=\"nf\">info<\/span><span class=\"p\">(<\/span><\/span>\r\n<span id=\"LC18\" class=\"line\" lang=\"python\">        <span class=\"n\">filename<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC19\" 
class=\"line\" lang=\"python\">        <span class=\"n\">partition_by_size<\/span><span class=\"p\">:<\/span> <span class=\"nb\">str<\/span> <span class=\"o\">=<\/span> <span class=\"bp\">None<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC20\" class=\"line\" lang=\"python\">        <span class=\"n\">partition_by_points<\/span><span class=\"p\">:<\/span> <span class=\"nb\">int<\/span> <span class=\"o\">=<\/span> <span class=\"bp\">None<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC21\" class=\"line\" lang=\"python\">        <span class=\"n\">partition_by_partitions<\/span><span class=\"p\">:<\/span> <span class=\"nb\">int<\/span> <span class=\"o\">=<\/span> <span class=\"bp\">None<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC22\" class=\"line\" lang=\"python\">        <span class=\"n\">echo<\/span><span class=\"p\">:<\/span> <span class=\"nb\">bool<\/span> <span class=\"o\">=<\/span> <span class=\"bp\">True<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC23\" class=\"line\" lang=\"python\">        <span class=\"n\">return_batch_points<\/span><span class=\"p\">:<\/span> <span class=\"nb\">bool<\/span> <span class=\"o\">=<\/span> <span class=\"bp\">False<\/span><\/span>\r\n<span id=\"LC24\" class=\"line\" lang=\"python\"><span class=\"p\">):<\/span><\/span>\r\n<span id=\"LC25\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC26\" class=\"line\" lang=\"python\">    <span class=\"n\">naturalsize<\/span> <span class=\"o\">=<\/span> <span class=\"k\">lambda<\/span> <span class=\"n\">x<\/span><span class=\"p\">:<\/span> <span class=\"n\">humanize<\/span><span class=\"p\">.<\/span><span class=\"nf\">naturalsize<\/span><span class=\"p\">(<\/span><span class=\"n\">x<\/span><span class=\"p\">,<\/span> <span class=\"n\">binary<\/span><span class=\"o\">=<\/span><span class=\"bp\">True<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC27\" class=\"line\" lang=\"python\">    <span class=\"k\">with<\/span> <span 
class=\"nf\">open<\/span><span class=\"p\">(<\/span><span class=\"n\">filename<\/span><span class=\"p\">,<\/span> <span class=\"sh\">\"<\/span><span class=\"s\">rb<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span> <span class=\"k\">as<\/span> <span class=\"n\">fid<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC28\" class=\"line\" lang=\"python\">        <span class=\"n\">reader<\/span> <span class=\"o\">=<\/span> <span class=\"n\">laspy<\/span><span class=\"p\">.<\/span><span class=\"nc\">LasReader<\/span><span class=\"p\">(<\/span><span class=\"n\">fid<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC29\" class=\"line\" lang=\"python\">        <span class=\"n\">npoints<\/span> <span class=\"o\">=<\/span> <span class=\"n\">reader<\/span><span class=\"p\">.<\/span><span class=\"n\">header<\/span><span class=\"p\">.<\/span><span class=\"n\">point_count<\/span><\/span>\r\n<span id=\"LC30\" class=\"line\" lang=\"python\">        <span class=\"n\">point_size<\/span> <span class=\"o\">=<\/span> <span class=\"n\">reader<\/span><span class=\"p\">.<\/span><span class=\"n\">header<\/span><span class=\"p\">.<\/span><span class=\"n\">point_format<\/span><span class=\"p\">.<\/span><span class=\"n\">size<\/span><\/span>\r\n<span id=\"LC31\" class=\"line\" lang=\"python\">        <span class=\"n\">memory_size<\/span> <span class=\"o\">=<\/span> <span class=\"n\">npoints<\/span> <span class=\"o\">*<\/span> <span class=\"n\">point_size<\/span><\/span>\r\n<span id=\"LC32\" class=\"line\" lang=\"python\">        <span class=\"k\">if<\/span> <span class=\"n\">echo<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC33\" class=\"line\" lang=\"python\">            <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">Filesize on disk: <\/span><span class=\"si\">{<\/span><span class=\"nf\">naturalsize<\/span><span class=\"p\">(<\/span><span 
class=\"n\">os<\/span><span class=\"p\">.<\/span><span class=\"n\">path<\/span><span class=\"p\">.<\/span><span class=\"nf\">getsize<\/span><span class=\"p\">(<\/span><span class=\"n\">filename<\/span><span class=\"p\">))<\/span><span class=\"si\">}<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC34\" class=\"line\" lang=\"python\">            <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">Is data compressed: <\/span><span class=\"si\">{<\/span><span class=\"n\">reader<\/span><span class=\"p\">.<\/span><span class=\"n\">header<\/span><span class=\"p\">.<\/span><span class=\"n\">are_points_compressed<\/span><span class=\"si\">}<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC35\" class=\"line\" lang=\"python\">            <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">records (points): <\/span><span class=\"si\">{<\/span><span class=\"n\">npoints<\/span><span class=\"si\">}<\/span><span class=\"s\"> (<\/span><span class=\"si\">{<\/span><span class=\"n\">humanize<\/span><span class=\"p\">.<\/span><span class=\"nf\">intword<\/span><span class=\"p\">(<\/span><span class=\"n\">npoints<\/span><span class=\"p\">)<\/span><span class=\"si\">}<\/span><span class=\"s\">)<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC36\" class=\"line\" lang=\"python\">            <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">Each record size: <\/span><span class=\"si\">{<\/span><span class=\"n\">point_size<\/span><span class=\"si\">}<\/span><span class=\"s\"> (<\/span><span class=\"si\">{<\/span><span class=\"nf\">naturalsize<\/span><span class=\"p\">(<\/span><span class=\"n\">point_size<\/span><span 
class=\"p\">)<\/span><span class=\"si\">}<\/span><span class=\"s\">)<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC37\" class=\"line\" lang=\"python\">            <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">Memory required: <\/span><span class=\"si\">{<\/span><span class=\"n\">memory_size<\/span><span class=\"si\">}<\/span><span class=\"s\"> (<\/span><span class=\"si\">{<\/span><span class=\"nf\">naturalsize<\/span><span class=\"p\">(<\/span><span class=\"n\">memory_size<\/span><span class=\"p\">)<\/span><span class=\"si\">}<\/span><span class=\"s\">)<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC38\" class=\"line\" lang=\"python\">    <span class=\"k\">if<\/span> <span class=\"n\">partition_by_partitions<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC39\" class=\"line\" lang=\"python\">        <span class=\"n\">batch_size<\/span> <span class=\"o\">=<\/span> <span class=\"nf\">round<\/span><span class=\"p\">(<\/span><span class=\"n\">memory_size<\/span><span class=\"o\">\/<\/span><span class=\"n\">partition_by_partitions<\/span><span class=\"o\">\/<\/span><span class=\"mi\">10<\/span><span class=\"p\">)<\/span><span class=\"o\">*<\/span><span class=\"mi\">10<\/span><\/span>\r\n<span id=\"LC40\" class=\"line\" lang=\"python\">        <span class=\"n\">batch_points<\/span> <span class=\"o\">=<\/span> <span class=\"nf\">round<\/span><span class=\"p\">(<\/span><span class=\"n\">batch_size<\/span><span class=\"o\">\/<\/span><span class=\"n\">point_size<\/span><span class=\"o\">\/<\/span><span class=\"mi\">10<\/span><span class=\"p\">)<\/span><span class=\"o\">*<\/span><span class=\"mi\">10<\/span><\/span>\r\n<span id=\"LC41\" class=\"line\" lang=\"python\">        <span class=\"n\">batch_size<\/span> <span class=\"o\">=<\/span> <span class=\"n\">batch_points<\/span> <span 
class=\"o\">*<\/span> <span class=\"n\">point_size<\/span><\/span>\r\n<span id=\"LC42\" class=\"line\" lang=\"python\">    <span class=\"k\">elif<\/span> <span class=\"n\">partition_by_points<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC43\" class=\"line\" lang=\"python\">        <span class=\"n\">batch_points<\/span> <span class=\"o\">=<\/span> <span class=\"n\">partition_by_points<\/span><\/span>\r\n<span id=\"LC44\" class=\"line\" lang=\"python\">        <span class=\"n\">batch_size<\/span> <span class=\"o\">=<\/span> <span class=\"n\">batch_points<\/span> <span class=\"o\">*<\/span> <span class=\"n\">point_size<\/span><\/span>\r\n<span id=\"LC45\" class=\"line\" lang=\"python\">    <span class=\"k\">elif<\/span> <span class=\"n\">partition_by_size<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC46\" class=\"line\" lang=\"python\">        <span class=\"n\">batch_size<\/span> <span class=\"o\">=<\/span> <span class=\"nf\">convert_humansize_to_bytes<\/span><span class=\"p\">(<\/span><span class=\"n\">partition_by_size<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC47\" class=\"line\" lang=\"python\">        <span class=\"n\">batch_points<\/span> <span class=\"o\">=<\/span> <span class=\"nf\">round<\/span><span class=\"p\">(<\/span><span class=\"n\">batch_size<\/span><span class=\"o\">\/<\/span><span class=\"n\">point_size<\/span><span class=\"o\">\/<\/span><span class=\"mi\">10<\/span><span class=\"p\">)<\/span><span class=\"o\">*<\/span><span class=\"mi\">10<\/span><\/span>\r\n<span id=\"LC48\" class=\"line\" lang=\"python\">        <span class=\"n\">batch_size<\/span> <span class=\"o\">=<\/span> <span class=\"n\">batch_points<\/span> <span class=\"o\">*<\/span> <span class=\"n\">point_size<\/span><\/span>\r\n<span id=\"LC49\" class=\"line\" lang=\"python\">    <span class=\"k\">else<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC50\" class=\"line\" lang=\"python\">        <span class=\"n\">batch_size<\/span> <span 
class=\"o\">=<\/span> <span class=\"n\">memory_size<\/span><\/span>\r\n<span id=\"LC51\" class=\"line\" lang=\"python\">        <span class=\"n\">batch_points<\/span> <span class=\"o\">=<\/span> <span class=\"n\">npoints<\/span><\/span>\r\n<span id=\"LC52\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC53\" class=\"line\" lang=\"python\">    <span class=\"n\">nbatches<\/span> <span class=\"o\">=<\/span> <span class=\"nf\">len<\/span><span class=\"p\">(<\/span><span class=\"nf\">range<\/span><span class=\"p\">(<\/span><span class=\"mi\">0<\/span><span class=\"p\">,<\/span> <span class=\"n\">npoints<\/span><span class=\"p\">,<\/span> <span class=\"n\">batch_points<\/span><span class=\"p\">))<\/span><\/span>\r\n<span id=\"LC54\" class=\"line\" lang=\"python\">    <span class=\"k\">if<\/span> <span class=\"nf\">any<\/span><span class=\"p\">([<\/span><span class=\"n\">partition_by_partitions<\/span><span class=\"p\">,<\/span> <span class=\"n\">partition_by_points<\/span><span class=\"p\">,<\/span> <span class=\"n\">partition_by_size<\/span><span class=\"p\">])<\/span> <span class=\"ow\">and<\/span> <span class=\"n\">echo<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC55\" class=\"line\" lang=\"python\">        <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sh\">\"<\/span><span class=\"s\">---<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC56\" class=\"line\" lang=\"python\">        <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">  chunk_size = <\/span><span class=\"si\">{<\/span><span class=\"n\">batch_size<\/span><span class=\"si\">}<\/span><span class=\"s\"> (<\/span><span class=\"si\">{<\/span><span class=\"nf\">naturalsize<\/span><span class=\"p\">(<\/span><span class=\"n\">batch_size<\/span><span class=\"p\">)<\/span><span class=\"si\">}<\/span><span class=\"s\">)<\/span><span 
class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC57\" class=\"line\" lang=\"python\">        <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">  points_per_chunk = <\/span><span class=\"si\">{<\/span><span class=\"n\">batch_points<\/span><span class=\"si\">}<\/span><span class=\"s\"> (<\/span><span class=\"si\">{<\/span><span class=\"n\">humanize<\/span><span class=\"p\">.<\/span><span class=\"nf\">intword<\/span><span class=\"p\">(<\/span><span class=\"n\">batch_points<\/span><span class=\"p\">)<\/span><span class=\"si\">}<\/span><span class=\"s\">)<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC58\" class=\"line\" lang=\"python\">        <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sa\">f<\/span><span class=\"sh\">\"<\/span><span class=\"s\">  nchunks = <\/span><span class=\"si\">{<\/span><span class=\"n\">nbatches<\/span><span class=\"si\">}<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC59\" class=\"line\" lang=\"python\">        <span class=\"nf\">print<\/span><span class=\"p\">(<\/span><span class=\"sh\">\"<\/span><span class=\"s\">---<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC60\" class=\"line\" lang=\"python\">    <span class=\"k\">if<\/span> <span class=\"n\">return_batch_points<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC61\" class=\"line\" lang=\"python\">        <span class=\"k\">return<\/span> <span class=\"n\">batch_points<\/span><\/span>\r\n\r\n\r\n<\/pre>\n<p dir=\"auto\" data-sourcepos=\"133:1-133:90\">Trying out the <code>info<\/code> function to see in action to see how these various options look like<\/p>\n<p dir=\"auto\" data-sourcepos=\"135:1-135:17\"><strong>chunk by size<\/strong><\/p>\n<p dir=\"auto\" data-sourcepos=\"137:1-137:57\">chunking the data with each 
chunk being approximately 100 MB<\/p>\n<pre class=\"code highlight\" lang=\"plaintext\"><span id=\"LC1\" class=\"line\" lang=\"plaintext\">&gt;&gt;&gt; filename = \".\/dense1-20230906-103153.laz\"<\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"plaintext\">&gt;&gt;&gt; info(filename, partition_by_size=\"100MB\")<\/span>\r\n<span id=\"LC3\" class=\"line\" lang=\"plaintext\">Filesize on disk: 2.1 GiB<\/span>\r\n<span id=\"LC4\" class=\"line\" lang=\"plaintext\">Is data compressed: True<\/span>\r\n<span id=\"LC5\" class=\"line\" lang=\"plaintext\">records (points): 148919876 (148.9 million)<\/span>\r\n<span id=\"LC6\" class=\"line\" lang=\"plaintext\">Each record size: 54 (54 Bytes)<\/span>\r\n<span id=\"LC7\" class=\"line\" lang=\"plaintext\">Memory required: 8041673304 (7.5 GiB)<\/span>\r\n<span id=\"LC8\" class=\"line\" lang=\"plaintext\">---<\/span>\r\n<span id=\"LC9\" class=\"line\" lang=\"plaintext\">  chunk_size = 99999900 (95.4 MiB)<\/span>\r\n<span id=\"LC10\" class=\"line\" lang=\"plaintext\">  points_per_chunk = 1851850 (1.9 million)<\/span>\r\n<span id=\"LC11\" class=\"line\" lang=\"plaintext\">  nchunks = 81<\/span>\r\n<span id=\"LC12\" class=\"line\" lang=\"plaintext\">---<\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p>As you may notice, the chunk size is a little less than <code>100 MB<\/code>. This is because <code>1 MB<\/code> translates to <code>1,000,000 bytes<\/code>, so <code>100 MB<\/code> corresponds to only about <code>95.4 MiB<\/code>. 
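<\/p>
<p>The two unit families can be made explicit with a small conversion helper (a sketch; <code>to_bytes<\/code> is a hypothetical name, and <code>dask.utils<\/code> provides similar parsing):<\/p>

```python
# Decimal (SI) prefixes are powers of 10; binary (IEC) prefixes are powers of 2.
UNITS = {
    'kB': 10**3, 'MB': 10**6, 'GB': 10**9,     # SI
    'KiB': 2**10, 'MiB': 2**20, 'GiB': 2**30,  # IEC
}

def to_bytes(value, unit):
    # Convert a value with a unit prefix into a plain byte count.
    return int(value * UNITS[unit])

print(to_bytes(100, 'MB'))   # 100000000
print(to_bytes(100, 'MiB'))  # 104857600
```

<p>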
If binary units (<code>1 MiB = 1,048,576 bytes<\/code>) are required, use the <code>MiB<\/code> notation.<\/p>\n<pre class=\"code highlight\" lang=\"plaintext\"><span id=\"LC1\" class=\"line\" lang=\"plaintext\">&gt;&gt;&gt; info(filename, partition_by_size=\"100MiB\")<\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"plaintext\">Filesize on disk: 2.1 GiB<\/span>\r\n<span id=\"LC3\" class=\"line\" lang=\"plaintext\">Is data compressed: True<\/span>\r\n<span id=\"LC4\" class=\"line\" lang=\"plaintext\">records (points): 148919876 (148.9 million)<\/span>\r\n<span id=\"LC5\" class=\"line\" lang=\"plaintext\">Each record size: 54 (54 Bytes)<\/span>\r\n<span id=\"LC6\" class=\"line\" lang=\"plaintext\">Memory required: 8041673304 (7.5 GiB)<\/span>\r\n<span id=\"LC7\" class=\"line\" lang=\"plaintext\">---<\/span>\r\n<span id=\"LC8\" class=\"line\" lang=\"plaintext\">  chunk_size = 104857740 (100.0 MiB)<\/span>\r\n<span id=\"LC9\" class=\"line\" lang=\"plaintext\">  points_per_chunk = 1941810 (1.9 million)<\/span>\r\n<span id=\"LC10\" class=\"line\" lang=\"plaintext\">  nchunks = 77<\/span>\r\n<span id=\"LC11\" class=\"line\" lang=\"plaintext\">---<\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><strong>chunk by points<\/strong><\/p>\n<pre class=\"code highlight\" lang=\"plaintext\"><span id=\"LC1\" class=\"line\" lang=\"plaintext\">&gt;&gt;&gt; info(filename, partition_by_points=1_000_000)<\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"plaintext\">Filesize on disk: 2.1 GiB<\/span>\r\n<span id=\"LC3\" class=\"line\" lang=\"plaintext\">Is data compressed: True<\/span>\r\n<span id=\"LC4\" class=\"line\" lang=\"plaintext\">records (points): 148919876 (148.9 million)<\/span>\r\n<span id=\"LC5\" class=\"line\" lang=\"plaintext\">Each record size: 54 (54 Bytes)<\/span>\r\n<span id=\"LC6\" class=\"line\" lang=\"plaintext\">Memory required: 8041673304 (7.5 GiB)<\/span>\r\n<span id=\"LC7\" class=\"line\" lang=\"plaintext\">---<\/span>\r\n<span id=\"LC8\" class=\"line\" lang=\"plaintext\">  
chunk_size = 54000000 (51.5 MiB)<\/span>\r\n<span id=\"LC9\" class=\"line\" lang=\"plaintext\">  points_per_chunk = 1000000 (1.0 million)<\/span>\r\n<span id=\"LC10\" class=\"line\" lang=\"plaintext\">  nchunks = 149<\/span>\r\n<span id=\"LC11\" class=\"line\" lang=\"plaintext\">---<\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><strong>chunk by partitions<\/strong><\/p>\n<pre class=\"code highlight\" lang=\"plaintext\"><span id=\"LC1\" class=\"line\" lang=\"plaintext\">&gt;&gt;&gt; info(filename, partition_by_partitions=100)<\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"plaintext\">Filesize on disk: 2.1 GiB<\/span>\r\n<span id=\"LC3\" class=\"line\" lang=\"plaintext\">Is data compressed: True<\/span>\r\n<span id=\"LC4\" class=\"line\" lang=\"plaintext\">records (points): 148919876 (148.9 million)<\/span>\r\n<span id=\"LC5\" class=\"line\" lang=\"plaintext\">Each record size: 54 (54 Bytes)<\/span>\r\n<span id=\"LC6\" class=\"line\" lang=\"plaintext\">Memory required: 8041673304 (7.5 GiB)<\/span>\r\n<span id=\"LC7\" class=\"line\" lang=\"plaintext\">---<\/span>\r\n<span id=\"LC8\" class=\"line\" lang=\"plaintext\">  chunk_size = 80416800 (76.7 MiB)<\/span>\r\n<span id=\"LC9\" class=\"line\" lang=\"plaintext\">  points_per_chunk = 1489200 (1.5 million)<\/span>\r\n<span id=\"LC10\" class=\"line\" lang=\"plaintext\">  nchunks = 100<\/span>\r\n<span id=\"LC11\" class=\"line\" lang=\"plaintext\">---<\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p>The <code>info<\/code> function shows what the logical division of the file into chunks looks like; the next step is to create a function that actually performs this division.<\/p>\n<h2><a id=\"user-content-dask\" class=\"anchor\" href=\"#dask\" aria-hidden=\"true\"><\/a>Dask<\/h2>\n<pre class=\"code highlight\" lang=\"python\"><span id=\"LC1\" class=\"line\" lang=\"python\"><span class=\"kn\">import<\/span> <span class=\"n\">dask<\/span><\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"python\"><span class=\"kn\">import<\/span> <span 
class=\"n\">dask.dataframe<\/span> <span class=\"k\">as<\/span> <span class=\"n\">dd<\/span><\/span>\r\n<span id=\"LC3\" class=\"line\" lang=\"python\"><span class=\"kn\">import<\/span> <span class=\"n\">pandas<\/span> <span class=\"k\">as<\/span> <span class=\"n\">pd<\/span><\/span>\r\n<span id=\"LC4\" class=\"line\" lang=\"python\"><span class=\"kn\">import<\/span> <span class=\"n\">numpy<\/span> <span class=\"k\">as<\/span> <span class=\"n\">np<\/span><\/span>\r\n<span class=\"line\" lang=\"python\"><span class=\"kn\">import<\/span> <span class=\"n\">laspy<\/span><\/span>\r\n<span id=\"LC5\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC6\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC7\" class=\"line\" lang=\"python\"><span class=\"nd\">@dask.delayed<\/span><\/span>\r\n<span id=\"LC8\" class=\"line\" lang=\"python\"><span class=\"k\">def<\/span> <span class=\"nf\">_dask_offset_reader<\/span><span class=\"p\">(<\/span><span class=\"n\">filename<\/span><span class=\"p\">,<\/span> <span class=\"n\">offset<\/span><span class=\"p\">,<\/span> <span class=\"n\">npoints<\/span><span class=\"p\">,<\/span> <span class=\"n\">scaled<\/span><span class=\"o\">=<\/span><span class=\"bp\">False<\/span><span class=\"p\">,<\/span> <span class=\"n\">func<\/span><span class=\"o\">=<\/span><span class=\"k\">lambda<\/span> <span class=\"n\">x<\/span><span class=\"p\">:<\/span><span class=\"n\">x<\/span><span class=\"p\">):<\/span><\/span>\r\n<span id=\"LC9\" class=\"line\" lang=\"python\">    <span class=\"k\">with<\/span> <span class=\"nf\">open<\/span><span class=\"p\">(<\/span><span class=\"n\">filename<\/span><span class=\"p\">,<\/span> <span class=\"sh\">\"<\/span><span class=\"s\">rb<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span> <span class=\"k\">as<\/span> <span class=\"n\">fid<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC10\" class=\"line\" lang=\"python\">        <span class=\"n\">reader<\/span> <span class=\"o\">=<\/span> <span class=\"n\">laspy<\/span><span class=\"p\">.<\/span><span class=\"nc\">LasReader<\/span><span 
class=\"p\">(<\/span><span class=\"n\">fid<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC11\" class=\"line\" lang=\"python\">        <span class=\"n\">reader<\/span><span class=\"p\">.<\/span><span class=\"nf\">seek<\/span><span class=\"p\">(<\/span><span class=\"n\">offset<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC12\" class=\"line\" lang=\"python\">        <span class=\"n\">pts<\/span> <span class=\"o\">=<\/span> <span class=\"n\">reader<\/span><span class=\"p\">.<\/span><span class=\"nf\">read_points<\/span><span class=\"p\">(<\/span><span class=\"n\">npoints<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC13\" class=\"line\" lang=\"python\">        <span class=\"k\">if<\/span> <span class=\"n\">scaled<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC14\" class=\"line\" lang=\"python\">            <span class=\"n\">xyz_scaled<\/span> <span class=\"o\">=<\/span> <span class=\"p\">{<\/span><span class=\"sh\">'<\/span><span class=\"s\">x<\/span><span class=\"sh\">'<\/span><span class=\"p\">:<\/span> <span class=\"n\">pts<\/span><span class=\"p\">.<\/span><span class=\"n\">x<\/span><span class=\"p\">.<\/span><span class=\"nf\">scaled_array<\/span><span class=\"p\">(),<\/span><\/span>\r\n<span id=\"LC15\" class=\"line\" lang=\"python\">                          <span class=\"sh\">'<\/span><span class=\"s\">y<\/span><span class=\"sh\">'<\/span><span class=\"p\">:<\/span> <span class=\"n\">pts<\/span><span class=\"p\">.<\/span><span class=\"n\">y<\/span><span class=\"p\">.<\/span><span class=\"nf\">scaled_array<\/span><span class=\"p\">(),<\/span><\/span>\r\n<span id=\"LC16\" class=\"line\" lang=\"python\">                          <span class=\"sh\">'<\/span><span class=\"s\">z<\/span><span class=\"sh\">'<\/span><span class=\"p\">:<\/span> <span class=\"n\">pts<\/span><span class=\"p\">.<\/span><span class=\"n\">z<\/span><span class=\"p\">.<\/span><span class=\"nf\">scaled_array<\/span><span 
class=\"p\">()}<\/span><\/span>\r\n<span id=\"LC17\" class=\"line\" lang=\"python\">        <span class=\"n\">data<\/span> <span class=\"o\">=<\/span> <span class=\"n\">pts<\/span><span class=\"p\">.<\/span><span class=\"n\">array<\/span><\/span>\r\n<span id=\"LC18\" class=\"line\" lang=\"python\">    <span class=\"n\">d<\/span> <span class=\"o\">=<\/span> <span class=\"n\">pd<\/span><span class=\"p\">.<\/span><span class=\"nc\">DataFrame<\/span><span class=\"p\">(<\/span><span class=\"n\">data<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC19\" class=\"line\" lang=\"python\">    <span class=\"n\">d<\/span><span class=\"p\">.<\/span><span class=\"n\">index<\/span> <span class=\"o\">=<\/span> <span class=\"n\">np<\/span><span class=\"p\">.<\/span><span class=\"nf\">arange<\/span><span class=\"p\">(<\/span><span class=\"n\">offset<\/span><span class=\"p\">,<\/span> <span class=\"n\">offset<\/span><span class=\"o\">+<\/span><span class=\"n\">npoints<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC20\" class=\"line\" lang=\"python\">    <span class=\"k\">if<\/span> <span class=\"n\">scaled<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC21\" class=\"line\" lang=\"python\">        <span class=\"n\">d<\/span><span class=\"p\">[<\/span><span class=\"sh\">'<\/span><span class=\"s\">x<\/span><span class=\"sh\">'<\/span><span class=\"p\">]<\/span> <span class=\"o\">=<\/span> <span class=\"n\">xyz_scaled<\/span><span class=\"p\">[<\/span><span class=\"sh\">'<\/span><span class=\"s\">x<\/span><span class=\"sh\">'<\/span><span class=\"p\">]<\/span><\/span>\r\n<span id=\"LC22\" class=\"line\" lang=\"python\">        <span class=\"n\">d<\/span><span class=\"p\">[<\/span><span class=\"sh\">'<\/span><span class=\"s\">y<\/span><span class=\"sh\">'<\/span><span class=\"p\">]<\/span> <span class=\"o\">=<\/span> <span class=\"n\">xyz_scaled<\/span><span class=\"p\">[<\/span><span class=\"sh\">'<\/span><span class=\"s\">y<\/span><span 
class=\"sh\">'<\/span><span class=\"p\">]<\/span><\/span>\r\n<span id=\"LC23\" class=\"line\" lang=\"python\">        <span class=\"n\">d<\/span><span class=\"p\">[<\/span><span class=\"sh\">'<\/span><span class=\"s\">z<\/span><span class=\"sh\">'<\/span><span class=\"p\">]<\/span> <span class=\"o\">=<\/span> <span class=\"n\">xyz_scaled<\/span><span class=\"p\">[<\/span><span class=\"sh\">'<\/span><span class=\"s\">z<\/span><span class=\"sh\">'<\/span><span class=\"p\">]<\/span><\/span>\r\n<span id=\"LC24\" class=\"line\" lang=\"python\">    <span class=\"n\">d<\/span> <span class=\"o\">=<\/span> <span class=\"nf\">func<\/span><span class=\"p\">(<\/span><span class=\"n\">d<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC25\" class=\"line\" lang=\"python\">    <span class=\"k\">return<\/span> <span class=\"n\">d<\/span><\/span>\r\n<span id=\"LC26\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC27\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC28\" class=\"line\" lang=\"python\"><span class=\"k\">def<\/span> <span class=\"nf\">dask_reader<\/span><span class=\"p\">(<\/span><\/span>\r\n<span id=\"LC29\" class=\"line\" lang=\"python\">        <span class=\"n\">filename<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC30\" class=\"line\" lang=\"python\">        <span class=\"n\">partition_by_size<\/span><span class=\"o\">=<\/span><span class=\"bp\">None<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC31\" class=\"line\" lang=\"python\">        <span class=\"n\">partition_by_points<\/span><span class=\"o\">=<\/span><span class=\"bp\">None<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC32\" class=\"line\" lang=\"python\">        <span class=\"n\">partition_by_partitions<\/span><span class=\"o\">=<\/span><span class=\"bp\">None<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC33\" class=\"line\" lang=\"python\">        <span class=\"n\">scaled<\/span><span class=\"o\">=<\/span><span 
class=\"bp\">False<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC34\" class=\"line\" lang=\"python\">        <span class=\"n\">func<\/span><span class=\"o\">=<\/span><span class=\"k\">lambda<\/span> <span class=\"n\">x<\/span><span class=\"p\">:<\/span><span class=\"n\">x<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC35\" class=\"line\" lang=\"python\"><span class=\"p\">):<\/span><\/span>\r\n<span id=\"LC36\" class=\"line\" lang=\"python\">    <span class=\"n\">batch_points<\/span> <span class=\"o\">=<\/span> <span class=\"nf\">info<\/span><span class=\"p\">(<\/span><\/span>\r\n<span id=\"LC37\" class=\"line\" lang=\"python\">        <span class=\"n\">filename<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC38\" class=\"line\" lang=\"python\">        <span class=\"n\">partition_by_size<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC39\" class=\"line\" lang=\"python\">        <span class=\"n\">partition_by_points<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC40\" class=\"line\" lang=\"python\">        <span class=\"n\">partition_by_partitions<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC41\" class=\"line\" lang=\"python\">        <span class=\"n\">echo<\/span><span class=\"o\">=<\/span><span class=\"bp\">False<\/span><span class=\"p\">,<\/span><\/span>\r\n<span id=\"LC42\" class=\"line\" lang=\"python\">        <span class=\"n\">return_batch_points<\/span><span class=\"o\">=<\/span><span class=\"bp\">True<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC43\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC44\" class=\"line\" lang=\"python\">    <span class=\"k\">with<\/span> <span class=\"nf\">open<\/span><span class=\"p\">(<\/span><span class=\"n\">filename<\/span><span class=\"p\">,<\/span> <span class=\"sh\">\"<\/span><span class=\"s\">rb<\/span><span class=\"sh\">\"<\/span><span class=\"p\">)<\/span> <span class=\"k\">as<\/span> <span class=\"n\">fid<\/span><span 
class=\"p\">:<\/span><\/span>\r\n<span id=\"LC45\" class=\"line\" lang=\"python\">        <span class=\"n\">reader<\/span> <span class=\"o\">=<\/span> <span class=\"n\">laspy<\/span><span class=\"p\">.<\/span><span class=\"nc\">LasReader<\/span><span class=\"p\">(<\/span><span class=\"n\">fid<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC46\" class=\"line\" lang=\"python\">        <span class=\"n\">dtype<\/span> <span class=\"o\">=<\/span> <span class=\"n\">reader<\/span><span class=\"p\">.<\/span><span class=\"n\">header<\/span><span class=\"p\">.<\/span><span class=\"n\">point_format<\/span><span class=\"p\">.<\/span><span class=\"nf\">dtype<\/span><span class=\"p\">()<\/span><\/span>\r\n<span id=\"LC47\" class=\"line\" lang=\"python\">        <span class=\"c1\">#meta = pd.DataFrame(np.empty(0, dtype=dtype))<\/span><\/span>\r\n<span id=\"LC48\" class=\"line\" lang=\"python\">        <span class=\"n\">npoints<\/span> <span class=\"o\">=<\/span> <span class=\"n\">reader<\/span><span class=\"p\">.<\/span><span class=\"n\">header<\/span><span class=\"p\">.<\/span><span class=\"n\">point_count<\/span><\/span>\r\n<span id=\"LC49\" class=\"line\" lang=\"python\">        <span class=\"n\">pairs<\/span> <span class=\"o\">=<\/span> <span class=\"p\">[(<\/span><span class=\"n\">batch_points<\/span><span class=\"p\">,<\/span> <span class=\"n\">offset<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC50\" class=\"line\" lang=\"python\">                 <span class=\"k\">for<\/span> <span class=\"n\">offset<\/span> <span class=\"ow\">in<\/span> <span class=\"nf\">range<\/span><span class=\"p\">(<\/span><span class=\"mi\">0<\/span><span class=\"p\">,<\/span> <span class=\"n\">npoints<\/span><span class=\"p\">,<\/span> <span class=\"n\">batch_points<\/span><span class=\"p\">)]<\/span><\/span>\r\n<span id=\"LC51\" class=\"line\" lang=\"python\">        <span class=\"n\">batch_pts<\/span><span class=\"p\">,<\/span> <span class=\"n\">offset<\/span> <span 
class=\"o\">=<\/span> <span class=\"n\">pairs<\/span><span class=\"p\">.<\/span><span class=\"nf\">pop<\/span><span class=\"p\">(<\/span><span class=\"o\">-<\/span><span class=\"mi\">1<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC52\" class=\"line\" lang=\"python\">        <span class=\"n\">batch_pts<\/span> <span class=\"o\">=<\/span> <span class=\"n\">npoints<\/span> <span class=\"o\">-<\/span> <span class=\"n\">offset<\/span><\/span>\r\n<span id=\"LC53\" class=\"line\" lang=\"python\">        <span class=\"n\">pairs<\/span><span class=\"p\">.<\/span><span class=\"nf\">append<\/span><span class=\"p\">((<\/span><span class=\"n\">batch_pts<\/span><span class=\"p\">,<\/span> <span class=\"n\">offset<\/span><span class=\"p\">))<\/span><\/span>\r\n<span id=\"LC54\" class=\"line\" lang=\"python\"><\/span>\r\n<span id=\"LC55\" class=\"line\" lang=\"python\">    <span class=\"n\">lazy_load<\/span> <span class=\"o\">=<\/span> <span class=\"p\">[]<\/span><\/span>\r\n<span id=\"LC56\" class=\"line\" lang=\"python\">    <span class=\"k\">for<\/span> <span class=\"n\">points<\/span><span class=\"p\">,<\/span> <span class=\"n\">offset<\/span> <span class=\"ow\">in<\/span> <span class=\"n\">pairs<\/span><span class=\"p\">:<\/span><\/span>\r\n<span id=\"LC57\" class=\"line\" lang=\"python\">        <span class=\"n\">lazy_func<\/span> <span class=\"o\">=<\/span> <span class=\"nf\">_dask_offset_reader<\/span><span class=\"p\">(<\/span><span class=\"n\">filename<\/span><span class=\"p\">,<\/span> <span class=\"n\">offset<\/span><span class=\"p\">,<\/span> <span class=\"n\">points<\/span><span class=\"p\">,<\/span> <span class=\"n\">scaled<\/span><span class=\"o\">=<\/span><span class=\"n\">scaled<\/span><span class=\"p\">,<\/span> <span class=\"n\">func<\/span><span class=\"o\">=<\/span><span class=\"n\">func<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC58\" class=\"line\" lang=\"python\">        <span class=\"n\">lazy_load<\/span><span 
class=\"p\">.<\/span><span class=\"nf\">append<\/span><span class=\"p\">(<\/span><span class=\"n\">lazy_func<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC59\" class=\"line\" lang=\"python\">    <span class=\"n\">meta_dtype<\/span> <span class=\"o\">=<\/span> <span class=\"nf\">_dask_offset_reader<\/span><span class=\"p\">(<\/span><span class=\"n\">filename<\/span><span class=\"p\">,<\/span> <span class=\"mi\">1<\/span><span class=\"p\">,<\/span> <span class=\"mi\">1<\/span><span class=\"p\">,<\/span> <span class=\"n\">scaled<\/span><span class=\"o\">=<\/span><span class=\"n\">scaled<\/span><span class=\"p\">,<\/span> <span class=\"n\">func<\/span><span class=\"o\">=<\/span><span class=\"n\">func<\/span><span class=\"p\">).<\/span><span class=\"nf\">compute<\/span><span class=\"p\">().<\/span><span class=\"nf\">to_records<\/span><span class=\"p\">().<\/span><span class=\"n\">dtype<\/span><\/span>\r\n<span id=\"LC60\" class=\"line\" lang=\"python\">    <span class=\"n\">meta<\/span> <span class=\"o\">=<\/span> <span class=\"n\">pd<\/span><span class=\"p\">.<\/span><span class=\"nc\">DataFrame<\/span><span class=\"p\">(<\/span><span class=\"n\">np<\/span><span class=\"p\">.<\/span><span class=\"nf\">empty<\/span><span class=\"p\">(<\/span><span class=\"mi\">0<\/span><span class=\"p\">,<\/span> <span class=\"n\">dtype<\/span><span class=\"o\">=<\/span><span class=\"n\">meta_dtype<\/span><span class=\"p\">))<\/span><\/span>\r\n<span id=\"LC61\" class=\"line\" lang=\"python\">    <span class=\"k\">return<\/span> <span class=\"n\">dd<\/span><span class=\"p\">.<\/span><span class=\"nf\">from_delayed<\/span><span class=\"p\">(<\/span><span class=\"n\">lazy_load<\/span><span class=\"p\">,<\/span> <span class=\"n\">meta<\/span><span class=\"o\">=<\/span><span class=\"n\">meta<\/span><span class=\"p\">)<\/span><\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p dir=\"auto\" data-sourcepos=\"278:1-278:143\"><code>dask_reader<\/code> function divides the file into chunks and 
lazily loads them on demand. Here is a workflow showing how to set up Dask to load this data.<\/p>\n<p dir=\"auto\" data-sourcepos=\"280:1-280:34\">Setting up a Dask cluster and client<\/p>\n<pre class=\"code highlight\" lang=\"plaintext\"><span id=\"LC1\" class=\"line\" lang=\"plaintext\">&gt;&gt;&gt; from dask.distributed import Client, LocalCluster<\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"plaintext\">&gt;&gt;&gt; cluster = LocalCluster(n_workers=12, threads_per_worker=2)<\/span>\r\n<span id=\"LC3\" class=\"line\" lang=\"plaintext\">&gt;&gt;&gt; client = Client(cluster)<\/span>\r\n<span id=\"LC4\" class=\"line\" lang=\"plaintext\">&gt;&gt;&gt; print(client)<\/span>\r\n<span id=\"LC5\" class=\"line\" lang=\"plaintext\">&lt;Client: 'tcp:\/\/127.0.0.1:33430' processes=12 threads=24, memory=376.31 GiB&gt;<\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p>Reading data<\/p>\n<pre class=\"code highlight\" lang=\"plaintext\"><span id=\"LC1\" class=\"line\" lang=\"plaintext\">&gt;&gt;&gt; df = dask_reader(filename, partition_by_size='200MiB')<\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"plaintext\">&gt;&gt;&gt; print(df.head())<\/span>\r\n<span id=\"LC3\" class=\"line\" lang=\"plaintext\">            X       Y       Z  intensity  bit_fields  classification_flags  \\<\/span>\r\n<span id=\"LC4\" class=\"line\" lang=\"plaintext\">    0  237355  566872  368473         13          17                     0   <\/span>\r\n<span id=\"LC5\" class=\"line\" lang=\"plaintext\">    1  258448  570271  368929          0          33                     0   <\/span>\r\n<span id=\"LC6\" class=\"line\" lang=\"plaintext\">    2  236036  565853  365470         19          34                     0   <\/span>\r\n<span id=\"LC7\" class=\"line\" lang=\"plaintext\">    3  270386  571869  368003         44          17                     0   <\/span>\r\n<span id=\"LC8\" class=\"line\" lang=\"plaintext\">    4  267408  570521  364769         17          33                     0   <\/span>\r\n<span 
id=\"LC9\" class=\"line\" lang=\"plaintext\">    <\/span>\r\n<span id=\"LC10\" class=\"line\" lang=\"plaintext\">       classification  user_data  scan_angle  point_source_id    gps_time  \\<\/span>\r\n<span id=\"LC11\" class=\"line\" lang=\"plaintext\">    0               0          0           0                0  290384.293   <\/span>\r\n<span id=\"LC12\" class=\"line\" lang=\"plaintext\">    1               0          0           0                0  290384.293   <\/span>\r\n<span id=\"LC13\" class=\"line\" lang=\"plaintext\">    2               0          0           0                0  290384.293   <\/span>\r\n<span id=\"LC14\" class=\"line\" lang=\"plaintext\">    3               0          0           0                0  290384.293   <\/span>\r\n<span id=\"LC15\" class=\"line\" lang=\"plaintext\">    4               0          0           0                0  290384.293   <\/span>\r\n<span id=\"LC16\" class=\"line\" lang=\"plaintext\">    <\/span>\r\n<span id=\"LC17\" class=\"line\" lang=\"plaintext\">       echo_width  fullwaveIndex  hitObjectId  heliosAmplitude  <\/span>\r\n<span id=\"LC18\" class=\"line\" lang=\"plaintext\">    0         0.0        2829327        21450       201.914284  <\/span>\r\n<span id=\"LC19\" class=\"line\" lang=\"plaintext\">    1         0.0        2829328        21450         7.727883  <\/span>\r\n<span id=\"LC20\" class=\"line\" lang=\"plaintext\">    2         0.0        2829328        21450       292.342904  <\/span>\r\n<span id=\"LC21\" class=\"line\" lang=\"plaintext\">    3         0.0        2829329        21450       684.970443  <\/span>\r\n<span id=\"LC22\" class=\"line\" lang=\"plaintext\">    4         0.0        2829330        21450       271.545107  <\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p>So, the question is, is there any benefit of using dask in terms of speed? 
Timing the operations: dask vs non-dask<\/p>\n<pre class=\"code highlight\" lang=\"python\"><span id=\"LC1\" class=\"line\" lang=\"python\"><span class=\"o\">%%<\/span><span class=\"n\">time<\/span><\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"python\"><span class=\"n\">hitobjectids<\/span> <span class=\"o\">=<\/span> <span class=\"n\">df<\/span><span class=\"p\">[<\/span><span class=\"sh\">'<\/span><span class=\"s\">hitObjectId<\/span><span class=\"sh\">'<\/span><span class=\"p\">].<\/span><span class=\"nf\">unique<\/span><span class=\"p\">().<\/span><span class=\"nf\">compute<\/span><span class=\"p\">()<\/span><\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"code highlight\" lang=\"plaintext\"><span id=\"LC1\" class=\"line\" lang=\"plaintext\">CPU times: user 642 ms, sys: 324 ms, total: 966 ms<\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"plaintext\">Wall time: 8.52 s<\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"code highlight\" lang=\"python\"><span id=\"LC1\" class=\"line\" lang=\"python\"><span class=\"o\">%%<\/span><span class=\"n\">time<\/span><\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"python\"><span class=\"n\">data<\/span> <span class=\"o\">=<\/span> <span class=\"n\">laspy<\/span><span class=\"p\">.<\/span><span class=\"nf\">read<\/span><span class=\"p\">(<\/span><span class=\"n\">filename<\/span><span class=\"p\">)<\/span><\/span>\r\n<span id=\"LC3\" class=\"line\" lang=\"python\"><span class=\"n\">h<\/span> <span class=\"o\">=<\/span> <span class=\"n\">np<\/span><span class=\"p\">.<\/span><span class=\"nf\">unique<\/span><span class=\"p\">(<\/span><span class=\"n\">data<\/span><span class=\"p\">[<\/span><span class=\"sh\">'<\/span><span class=\"s\">hitObjectId<\/span><span class=\"sh\">'<\/span><span class=\"p\">])<\/span><\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"code highlight\" lang=\"plaintext\"><span id=\"LC1\" class=\"line\" lang=\"plaintext\">CPU times: user 2min 10s, sys: 4.06 s, total: 2min 14s<\/span>\r\n<span 
id=\"LC2\" class=\"line\" lang=\"plaintext\">Wall time: 13.6 s<\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p dir=\"auto\" data-sourcepos=\"340:1-340:310\">Well, with Dask the performance is a little better, but not significantly so. There are two things happening here: loading the data from disk (an I\/O operation) and the analysis operation that follows. 
Rather than measuring the timing for these operations combined, consider measuring the timing for the analysis part alone.<\/p>\n<p dir=\"auto\" data-sourcepos=\"342:1-342:34\">Loading data into memory with Dask<\/p>\n<pre class=\"code highlight\" lang=\"python\"><span id=\"LC1\" class=\"line\" lang=\"python\"><span class=\"o\">%%<\/span><span class=\"n\">time<\/span><\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"python\"><span class=\"n\">df<\/span> <span class=\"o\">=<\/span> <span class=\"n\">df<\/span><span class=\"p\">.<\/span><span class=\"nf\">persist<\/span><span class=\"p\">()<\/span><\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"code highlight\" lang=\"plaintext\"><span id=\"LC1\" class=\"line\" lang=\"plaintext\">CPU times: user 12.5 ms, sys: 7.97 ms, total: 20.5 ms<\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"plaintext\">Wall time: 19.1 ms<\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p dir=\"auto\" data-sourcepos=\"354:1-354:169\">Since <code>persist<\/code> is a non-blocking operation, the data loading part happens in the background. 
It takes a couple of seconds for the Dask workers to load the data into memory.<\/p>\n<p dir=\"auto\" data-sourcepos=\"356:1-356:62\">After waiting a couple of seconds, re-run the analysis part.<\/p>\n<pre class=\"code highlight\" lang=\"python\"><span id=\"LC1\" class=\"line\" lang=\"python\"><span class=\"o\">%%<\/span><span class=\"n\">time<\/span><\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"python\"><span class=\"n\">hitobjectids<\/span> <span class=\"o\">=<\/span> <span class=\"n\">df<\/span><span class=\"p\">[<\/span><span class=\"sh\">'<\/span><span class=\"s\">hitObjectId<\/span><span class=\"sh\">'<\/span><span class=\"p\">].<\/span><span class=\"nf\">unique<\/span><span class=\"p\">().<\/span><span class=\"nf\">compute<\/span><span class=\"p\">()<\/span><\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"code highlight\" lang=\"plaintext\"><span id=\"LC1\" class=\"line\" lang=\"plaintext\">CPU times: user 47.1 ms, sys: 27.8 ms, total: 74.9 ms<\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"plaintext\">Wall time: 105 ms<\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p>Re-running the analysis part without Dask:<\/p>\n<pre class=\"code highlight\" lang=\"python\"><span id=\"LC1\" class=\"line\" lang=\"python\"><span class=\"o\">%%<\/span><span class=\"n\">time<\/span><\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"python\"><span class=\"n\">h<\/span> <span class=\"o\">=<\/span> <span class=\"n\">np<\/span><span class=\"p\">.<\/span><span class=\"nf\">unique<\/span><span class=\"p\">(<\/span><span class=\"n\">data<\/span><span class=\"p\">[<\/span><span class=\"sh\">'<\/span><span class=\"s\">hitObjectId<\/span><span class=\"sh\">'<\/span><span class=\"p\">])<\/span><\/span>\r\n<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"code highlight\" lang=\"plaintext\"><span id=\"LC1\" class=\"line\" lang=\"plaintext\">CPU times: user 4.6 s, sys: 677 ms, total: 5.28 s<\/span>\r\n<span id=\"LC2\" class=\"line\" lang=\"plaintext\">Wall time: 4.87 s\r\n<\/span><\/pre>\n<p>Now this 
is a significant difference in performance: the Dask workflow clearly outperforms the conventional one.<\/p>\n<h2><a id=\"user-content-discussion\" class=\"anchor\" href=\"#discussion\" aria-hidden=\"true\"><\/a>Discussion<\/h2>\n<p dir=\"auto\" data-sourcepos=\"384:1-384:367\">In this particular example the source file requires approximately 9 GB of memory to load the whole dataset, which comfortably fits into a single node's memory. For larger datasets which do not fit on a single node, the <code>dask_jobqueue<\/code> package enables distributed computing across nodes. In that case, only the cluster setup part differs; the rest of the workflow remains the same.<\/p>\n<p dir=\"auto\" data-sourcepos=\"386:1-386:312\">The objective of this article is to showcase a Dask workflow for 3D point cloud data using the chunks feature. As demonstrated, with a few helper functions, 3D point cloud data can be read directly with Dask. Even with the very limited analysis shown here, the performance benefits of using Dask are quite evident.<\/p>\n<p dir=\"auto\" data-sourcepos=\"388:1-388:306\"><code>dask_reader<\/code> accepts an optional <code>scaled<\/code> parameter which scales the <code>x<\/code>, <code>y<\/code>, <code>z<\/code> values. The optional <code>func<\/code> parameter is intended for manipulating the pandas dataframe, for instance to return only selected columns. Provide a custom function that receives a dataframe as input and returns a dataframe.<\/p>\n<h2 dir=\"auto\" data-sourcepos=\"388:1-388:306\">Author<\/h2>\n<p><a href=\"mailto:hpc@gwdg.de\"><strong>Pavan Siligam<\/strong><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>There are many python packages that are tightly integrated with Dask which enables parallel data processing. For instance, consider xarray package. This package is used to read datasets in netcdf, hdf5, zarr file formats. 
Dask comes in play when data is read from multile files where each file is treated as a chunk or by &#8230; <a title=\"Parallel 3D Point Cloud Data analysis with Dask\" class=\"read-more\" href=\"https:\/\/info.gwdg.de\/news\/parallel-3d-point-cloud-data-analysis-with-dask\/\" aria-label=\"Mehr Informationen \u00fcber Parallel 3D Point Cloud Data analysis with Dask\">Weiterlesen<\/a><\/p>\n","protected":false},"author":166,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[132,129],"tags":[],"class_list":["post-23518","post","type-post","status-publish","format-standard","hentry","category-forstwirtschaften","category-wissenschaftliche-domaenen"],"_links":{"self":[{"href":"https:\/\/info.gwdg.de\/news\/wp-json\/wp\/v2\/posts\/23518","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/info.gwdg.de\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/info.gwdg.de\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/info.gwdg.de\/news\/wp-json\/wp\/v2\/users\/166"}],"replies":[{"embeddable":true,"href":"https:\/\/info.gwdg.de\/news\/wp-json\/wp\/v2\/comments?post=23518"}],"version-history":[{"count":3,"href":"https:\/\/info.gwdg.de\/news\/wp-json\/wp\/v2\/posts\/23518\/revisions"}],"predecessor-version":[{"id":23522,"href":"https:\/\/info.gwdg.de\/news\/wp-json\/wp\/v2\/posts\/23518\/revisions\/23522"}],"wp:attachment":[{"href":"https:\/\/info.gwdg.de\/news\/wp-json\/wp\/v2\/media?parent=23518"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/info.gwdg.de\/news\/wp-json\/wp\/v2\/categories?post=23518"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/info.gwdg.de\/news\/wp-json\/wp\/v2\/tags?post=23518"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}