project-docs/technotes.rst (13 additions, 23 deletions)
@@ -56,37 +56,27 @@ Create a reStructuredText technote
==================================

ReStructuredText-formatted technotes are built with Sphinx_ into websites.
- Create a reStructuredText-formatted technote by messaging the SQuaRE Bot on Slack:

- .. code-block:: text
+ 1. Follow the instructions at `lsst-technote-bootstrap <https://github.com/lsst-sqre/lsst-technote-bootstrap#running-this-cookiecutter-for-development>`__ to manually create a technote repository.
+ 2. Create a GitHub repository in the appropriate organization with the technote's handle as the name.
+    The organizations are:

-    Set the ``<title>``, ``<description>`` and ``<series>`` fields (see below) for your technote, but keep the ``{{`` and ``}}`` delimiters.
+    `lsst-dm <https://github.com/lsst-dm>`__
+       DMTN series.

- .. note::
-
-    In a Direct Message channel with SQuaRE Bot, don't include the ``@sqrbot`` prefix.
+    `lsst-sqre <https://github.com/lsst-sqre>`__
+       SQR series.

- The fields are:
+    `lsst-sims <https://github.com/lsst-sims>`__
+       SMTN series.

- series
-    Values can be ``dmtn``, ``sqr`` or ``smtn``.
-    Use ``test`` for testing.
+ 3. Message the `#dm-docs <https://lsstc.slack.com/archives/dm-docs>`__ Slack channel so that the Travis integration for your technote can be activated.

- title
-    Title of the technote.
-    The title doesn't include the handle (DMTN, SQR, or SMTN).
-    You can update the title later by modifying the :ref:`metadata file <technote-rst-metadata>`.
-
- description
-    Short abstract for the technote.
-    The description is used both as an abstract in the technote itself and in the technote's README.
-    You can update description later by editing the technote and the :ref:`metadata file <technote-rst-metadata>`.
+ .. note::

-    SQuaRE Bot prepares technotes in the background after you make your request.
-    Go to the GitHub organization for your :ref:`document series <technote-series>` to find your new technote repository.
-    Reach out on the `#dm-docs <slack-dm-docs>`_ Slack channel for help.
+    Previously you could use a Slack command, ``@sqrbot project create``, to create a reStructuredText technote.
+    Due to reliability issues with that service, we recommend that you use this manual process for now.
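The manual bootstrap step in the diff above relies on the cookiecutter templating tool. The following is a minimal sketch of what that workflow might look like; it assumes a working Python/pip installation, and the output directory name (dmtn-999) and GitHub handle are hypothetical placeholders rather than values from the source.

.. code-block:: bash

   # Install the cookiecutter templating tool (assumes Python and pip are available).
   pip install --user cookiecutter

   # Generate a technote scaffold from the lsst-technote-bootstrap template;
   # cookiecutter prompts for fields such as series, title, and description
   # (the exact prompt names depend on the template).
   cookiecutter https://github.com/lsst-sqre/lsst-technote-bootstrap.git

   # Push the result to a repository named after the technote's handle in the
   # matching organization (dmtn-999 and lsst-dm are illustrative only).
   cd dmtn-999
   git init && git add . && git commit -m "Initial technote scaffold"
   git remote add origin git@github.com:lsst-dm/dmtn-999.git
   git push -u origin master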
services/lsst-dev.rst (4 additions, 12 deletions)
@@ -189,18 +189,15 @@ If you are using `Bash`_ — the default shell on ``lsst-dev01`` — try placing
Load the LSST Environment
=========================

- We provide ready-to-use “shared” versions of the LSST software stack to enable developers to get up and running quickly with no installation step.
- Each shared stack includes a fullyfledged Miniconda-based Python environment, a selection of additional development tools, and a selection of builds of the lsst_distrib meta-package.
- The currently maintained stacks are regularly updated to include the latest weekly release, which is tagged as ``current``.
+ We provide a ready-to-use “shared” version of the LSST software stack to enable developers to get up and running quickly with no installation step.
+ The shared stack includes a fully-fledged Miniconda-based Python environment, a selection of additional development tools, and a selection of builds of the lsst_distrib meta-package.
+ The current stack is regularly updated to include the latest weekly release, which is tagged as ``current``.

:file:`/ssd/lsstsw/stack2_20171021`       2   ``devtoolset-6``   Located on local, SSD based storage attached to the `lsst-dev01` system: it will support fast interactive use on that machine, but is not accessible across the network.
- :file:`/ssd/lsstsw/stack3_20171021`       3   ``devtoolset-6``   Located on local, SSD based storage attached to the `lsst-dev01` system: it will support fast interactive use on that machine, but is not accessible across the network.
- :file:`/software/lsstsw/stack2_20171022`  2   ``devtoolset-6``   Located on GPFS-based network storage; as such, it is cross-mounted across a variety of LSST systems at NCSA including those configured as part of the `HTCondor pool`_ and :doc:`Verification Cluster <verification>`.
:file:`/software/lsstsw/stack3_20171023`  3   ``devtoolset-6``   Located on GPFS-based network storage; as such, it is cross-mounted across a variety of LSST systems at NCSA including those configured as part of the `HTCondor pool`_ and :doc:`Verification Cluster <verification>`.
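For orientation, here is a hedged sketch of how a shared stack such as those listed above is typically loaded. It assumes the conventional EUPS layout in which each stack directory ships a loadLSST.bash script; substitute the actual path from the table.

.. code-block:: bash

   # Load the shared stack environment (path taken from the table above;
   # adjust to the stack you actually want to use).
   source /software/lsstsw/stack3_20171023/loadLSST.bash

   # Set up the lsst_distrib meta-package at the weekly release tagged "current".
   setup lsst_distrib -t current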
services/storage.rst (22 additions, 11 deletions)
@@ -4,6 +4,12 @@ Storage Resources

There are a few other documents that might have the info you are looking for.

+
+ 1. Look to the :doc:`data_protection` policy page for the retention policy, what immutable files are, and what is to be placed in each file system area.
+ 2. Look to the :doc:`ldf-resources` LDF resources pages for an explanation of each of the file systems, the type of data and where it is to be located, and the policies of each file system.
+
+ This document covers the file systems, and then the quotas that are currently in place only for the /home file system.
1. Look to the :doc:`data_protection` policy page what the retention policy is, what are immutable files, what is to be placed in each file system area.
2. Look to the :doc:`ldf-resources` LDF resources pages for explanation of each of the file systems, and the type of data and where it's to be located and the policies of each of the file systems.

@@ -15,6 +21,7 @@ Filesystems - in GPFS (4.9PB of storage)
:file:`/datasets`
   Long term storage of project-approved shared data. Contains immutable data. This is under a disaster recovery policy that every 30 days it is stored and written to nearline tape.

+
:file:`/home`
   Storage of individual-user data. This data is backed up on a daily basis and NCSA retains 30 days of those backups in a snapshot. It does have quotas on this file system for 1TB for each "directory," and a 1 million INODE quota.

@@ -33,29 +40,36 @@ Filesystems - in GPFS (4.9PB of storage)
Quotas
======

Your home directory is the default directory you are placed in when you log on. You should use this space for storing files you want to keep long term such as source code, scripts, etc. Every user has a 1TB home directory quota (total space) and 1 million INODE quota (total number of files).

- On **TIME**, quotas were enforced. The soft limit is 1TB and the hard limit is 1.2 TB. The INODE soft quota is 1 million files and the hard limit is 1.2 million files. If the amount of data in your home directory is over the soft limit but under the hard limit, there is a grace period of 7 days to get under the soft limit. When the grace period expires, you will not be able to write new files or update any current files until you reduce the amount of data to below the soft limit.
+ On 6/17/2018, quotas were enforced. The soft limit is 1TB and the hard limit is 1.2 TB. The INODE soft quota is 1 million files and the hard limit is 1.2 million files. If the amount of data in your home directory is over the soft limit but under the hard limit, there is a grace period of 7 days to get under the soft limit. When the grace period expires, you will not be able to write new files or update any current files until you reduce the amount of data to below the soft limit.
+

The command to see your disk usage and limits is quota. Example:
Home directories are backed up using snapshots and a separate DR process.

- Data Compression
- ================
+ For space utilization: Data Compression
+ To reduce space usage in your home directory, an option for files that are not in active use is to compress them. The gzip utility can be used for file compression and decompression. Another alternative is bzip2, which usually yields a better compression ratio than gzip but takes longer to complete. Additionally, files that are typically used together can first be combined into a single file and then compressed using the tar utility.
+

To reduce space usage in your home directory, an option for files that are not in active use is to compress them. The :command:`gzip` utility can be used for file compression and decompression. Another alternative is :command:`bzip2`, which usually yields a better compression ratio than gzip but takes longer to complete. Additionally, files that are typically used together can first be combined into a single file and then compressed using the tar utility.

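The quota paragraph above refers to the quota command. As a minimal sketch (the flag and output format are common to the Linux quota utility but are an assumption for this particular system):

.. code-block:: bash

   # Report block and inode usage against your limits in human-readable units;
   # -s is supported by the standard Linux quota utility but may vary here.
   quota -s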
@@ -74,6 +88,7 @@ To decompress the file:

.. code-block:: bash

+
   gunzip largefile.dat.gz

Alternatively:
@@ -100,13 +115,9 @@ The convention is to use extension ``.tgz`` in the file name.

To extract the contents of the compressed tar file:

- .. code-block:: bash
-
-    tar -xvf largedir.tgz

- See the manual pages (``man gzip``, ``man bzip2``, ``man tar``) for more details on these utilities.
+ **Notes:**

- .. note::

   ASCII text and binary files like executables can yield good compression ratios. Image file formats (gif, jpg, png, etc.) are already natively compressed so further compression will not yield much gains.
   Depending on the size of the files, the compression utilities can be compute intensive and take a while to complete. Use the compute nodes via a batch job for compressing large files.
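To complement the extraction example removed above, here is a hedged sketch of creating the compressed archive in the first place; largedir is a hypothetical directory name used only for illustration.

.. code-block:: bash

   # Bundle a directory and gzip-compress it in one step; the .tgz extension
   # follows the convention mentioned earlier (largedir is hypothetical).
   tar -czvf largedir.tgz largedir/

   # bzip2 usually achieves a better ratio than gzip but is slower; -j selects it.
   tar -cjvf largedir.tar.bz2 largedir/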