I’m having trouble downloading large files from LILA; my browser eventually gives up. Help!
If you’re having download issues, we recommend trying AzCopy, a command-line tool for downloading large files stored on Azure. Install AzCopy from here (available for Windows, Linux, and macOS).
If the URL you’re trying to download is:

https://lilablobssc.blob.core.windows.net/mydataset/myfile.zip

…use the following AzCopy command line to copy it to a local directory:

azcopy cp "https://lilablobssc.blob.core.windows.net/mydataset/myfile.zip" "/absolute/path/to/desired/local/dir/myfile.zip"
Note that the destination path needs to be an absolute path on your destination computer, and it is best to surround both the URL and the destination path with double quotes.
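If you can’t install AzCopy, a plain streamed download also avoids the browser’s memory limits, though it lacks AzCopy’s retries and parallelism, so AzCopy remains our recommendation. A minimal sketch (the URL and paths below are just the placeholder examples from above):

```python
import shutil
import urllib.request

def stream_download(url, dest_path, chunk_size=1 << 20):
    """Stream a remote file to disk in 1 MB chunks, so a large blob
    never has to fit in memory (which is where browsers tend to fail)."""
    with urllib.request.urlopen(url) as response, open(dest_path, "wb") as f:
        shutil.copyfileobj(response, f, chunk_size)

# Hypothetical usage with the example URL from above:
# stream_download(
#     "https://lilablobssc.blob.core.windows.net/mydataset/myfile.zip",
#     "/absolute/path/to/desired/local/dir/myfile.zip",
# )
```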
What format do you use for metadata on LILA?
We use different formats depending on the nature of each data set, but for camera trap data (and we love camera trap data!), we have tried to convert all data to a common format. More on the details of this format in a second, but if you share data on LILA, we’re willing to do the work to get your data into this format, and post that along with your original metadata. This makes it much easier for new researchers to work with your data.
We use the “COCO Camera Traps” .json format proposed by Beery et al., which is a refinement of the format used by the COCO data set, adding fields specific to camera trap data sets. The format is formally specified here.
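As a rough illustration, COCO Camera Traps files share COCO’s top-level structure: lists of “images”, “annotations”, and “categories”, with annotations linking images to categories by ID. The file below is a tiny hypothetical example, not real LILA metadata:

```python
import json

# A minimal, made-up COCO Camera Traps-style file; real LILA metadata
# uses the same top-level keys but many more fields per record.
metadata = json.loads("""
{
  "images": [{"id": "seq1_img1", "file_name": "seq1/img1.jpg"}],
  "annotations": [{"id": "ann1", "image_id": "seq1_img1", "category_id": 1}],
  "categories": [{"id": 1, "name": "deer"}]
}
""")

# Map category IDs to names, then list the label attached to each image
id_to_name = {c["id"]: c["name"] for c in metadata["categories"]}
for ann in metadata["annotations"]:
    print(ann["image_id"], "->", id_to_name[ann["category_id"]])
```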
I want to confirm that my download wasn’t corrupted… do you publish file sizes and MD5s?
We sure do! See this page.
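To check a download against a published MD5, you can hash the file in chunks so even very large archives verify without loading into memory. A sketch (the file name and expected checksum below are placeholders):

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Compute the MD5 of a file by reading it in 1 MB chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

# Hypothetical usage: compare against the checksum published for your file.
# expected = "<md5 from the LILA checksums page>"
# assert file_md5("myfile.zip") == expected
```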
I want to try my hand at machine learning for conservation, but don’t have the processing power to deal with these big datasets. Do you happen to know a good way to get free compute resources?