Commit f961735

Merge branch 'master' into aloftus-update-process-for-request-immutability
2 parents 0d52f93 + 2810482 commit f961735

File tree: 12 files changed, +69 additions, -81 deletions

cpp/style.rst

Lines changed: 16 additions & 15 deletions

@@ -807,26 +807,27 @@ Arguments common to all overloads of a function should precede those that differ

 .. _style-guide-cpp-3-34:

-3-34. Uncertainty values associated with a variable SHOULD be suffixed by one of ``Var``, ``Cov``, ``Sigma``.
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-There is no universal suffix for uncertainties; i.e. no ``Err`` suffix will be used.
-The cases that we have identified, and their appropriate suffixes, are:
-
-- Standard deviation: ``Sigma`` (not ``Rms``, as ``rms`` doesn't imply that the mean's subtracted)
-- Covariance: ``Cov``
-- Variance: ``Var``
+3-34. Uncertainty values associated with a variable SHOULD be suffixed by one of ``Err`` or ``Cov``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Following the DPDD, we use ``Err`` (not ``Sigma``) to specify error quantities, as a standard deviation.
+``Sigma`` should be used to specify the inherent width of a distribution or function.

 .. code-block:: cpp

-   float xAstrom;       // x position computed by a centroiding algorithm
-   float xAstromSigma;  // Uncertainty of xAstrom
-   float yAstrom;
-   float yAstromSigma;
-   float xyAstromCov;
+   // for single measurements
+   float xCentroid;      // x position computed by a centroiding algorithm
+   float xCentroidErr;   // Uncertainty of xCentroid
+   float yCentroid;
+   float yCentroidErr;
+   float xyCentroidCov;  // Covariance of x/y centroid
+
+   // for distributions
+   float fpFluxMean;     // Weighted mean of forced-photometry flux (fpFlux)
+   float fpFluxMeanErr;  // Uncertainty (standard deviation) of fpFluxMean.
+   float fpFluxSigma;    // Standard deviation of the distribution of fpFlux.

-The postfix ``Err`` can easily be misinterpreted as error flags.
-Use the full ``Sigma`` since ``Sig`` can easily be misinterpreted as ``Signal``.
+For distribution widths, use the full ``Sigma`` since ``Sig`` can easily be misinterpreted as ``Signal``.

 .. _style-guide-cpp-3-35:

git/git-lfs.rst

Lines changed: 1 addition & 1 deletion

@@ -79,7 +79,7 @@ Try cloning a small data repository to test your configuration:

 .. code-block:: bash

-   git clone https://github.com/lsst/testdata_decam
+   git clone https://github.com/lsst/testdata_subaru

 *That's it.*

legal/copyright.py

Lines changed: 1 addition & 1 deletion

@@ -171,7 +171,7 @@ def stringify_year_set(years):
     if len(sys.argv) > 1:
         git_cmd += sys.argv[1:]

-    log = subprocess.check_output(git_cmd)
+    log = subprocess.check_output(git_cmd, universal_newlines=True)

     # Read in the full hashes of any commits deemed not copyrightable.
     insignificant_list = []
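The change above switches ``check_output`` to text mode. A minimal standalone sketch (not the repository's script; the command here is illustrative) of why ``universal_newlines=True`` matters:

```python
import subprocess

# Without universal_newlines=True, check_output returns bytes; with it,
# the output is decoded to str and newlines are normalized, so the git
# log can be split and matched as text without explicit decoding.
raw = subprocess.check_output(["echo", "hello"])
text = subprocess.check_output(["echo", "hello"], universal_newlines=True)

assert isinstance(raw, bytes)   # b"hello\n"
assert isinstance(text, str)    # "hello\n"
```

On Python 3.7+, ``text=True`` is the preferred spelling of the same option.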

project-docs/technotes.rst

Lines changed: 13 additions & 23 deletions

@@ -56,37 +56,27 @@ Create a reStructuredText technote
 ==================================

 ReStructuredText-formatted technotes are built with Sphinx_ into websites.
-Create a reStructuredText-formatted technote by messaging the SQuaRE Bot on Slack:
-
-.. code-block:: text
-
-   @sqrbot project create technote series={{<series>}} title={{<title>}} description={{<description>}}
-
-Set the ``<title>``, ``<description>`` and ``<series>`` fields (see below) for your technote, but keep the ``{{`` and ``}}`` delimiters.
-
-.. note::
-
-   In a Direct Message channel with SQuaRE Bot, don't include the ``@sqrbot`` prefix.
-
-The fields are:
-
-series
-   Values can be ``dmtn``, ``sqr`` or ``smtn``.
-   Use ``test`` for testing.
-
-title
-   Title of the technote.
-   The title doesn't include the handle (DMTN, SQR, or SMTN).
-   You can update the title later by modifying the :ref:`metadata file <technote-rst-metadata>`.
-
-description
-   Short abstract for the technote.
-   The description is used both as an abstract in the technote itself and in the technote's README.
-   You can update the description later by editing the technote and the :ref:`metadata file <technote-rst-metadata>`.
-
-SQuaRE Bot prepares technotes in the background after you make your request.
-Go to the GitHub organization for your :ref:`document series <technote-series>` to find your new technote repository.
-Reach out on the `#dm-docs <slack-dm-docs>`_ Slack channel for help.
+
+1. Follow the instructions at `lsst-technote-bootstrap <https://github.com/lsst-sqre/lsst-technote-bootstrap#running-this-cookiecutter-for-development>`__ to manually create a technote repository.
+
+2. Create a GitHub repository in the appropriate organization with the technote's handle as the name.
+   The organizations are:
+
+   `lsst-dm <https://github.com/lsst-dm>`__
+      DMTN series.
+
+   `lsst-sqre <https://github.com/lsst-sqre>`__
+      SQR series.
+
+   `lsst-sims <https://github.com/lsst-sims>`__
+      SMTN series.
+
+3. Message the `#dm-docs <https://lsstc.slack.com/archives/dm-docs>`__ Slack channel so that the Travis integration for your technote can be activated.
+
+.. note::
+
+   Previously you could use a Slack command, ``@sqrbot project create``, to create a reStructuredText technote.
+   Due to reliability issues with that service, we recommend this manual process for now.

 .. _Sphinx: http://www.sphinx-doc.org/en/stable/
 .. _stack-dm-docs: https://lsstc.slack.com/messages/C2B6DQBAL/

python/numpydoc.rst

Lines changed: 7 additions & 0 deletions

@@ -1053,6 +1053,13 @@ Class, method, and function docstrings must be placed directly below the declara
 Again, the :ref:`class docstring <py-docstring-class-structure>` takes the place of a docstring for the ``__init__`` method.
 ``__init__`` methods don't have docstrings.

+Dunder Methods
+--------------
+
+Special "dunder" methods on classes only need docstrings if they do anything non-standard.
+For example, if a ``__getslice__`` method cannot take negative indices, that should be noted.
+But if ``__ge__`` returns true when ``self`` is greater than or equal to the argument, that need not be documented.
+
 Examples of Method and Function Docstrings
 ------------------------------------------
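A sketch of the added guideline in practice (``PositiveIndexList`` is a hypothetical class invented for illustration, not from the LSST codebase):

```python
class PositiveIndexList:
    """A list wrapper whose indexing deliberately deviates from the norm."""

    def __init__(self, data):
        self._data = list(data)

    def __getitem__(self, index):
        """Return the item at ``index``.

        Notes
        -----
        Negative indices are *not* supported and raise ``IndexError``.
        This non-standard behavior is why this dunder has a docstring.
        """
        if index < 0:
            raise IndexError("negative indices are not supported")
        return self._data[index]

    def __len__(self):
        # Standard semantics (number of stored items), so per the
        # guideline above no docstring is needed here.
        return len(self._data)
```

Here ``__getitem__`` documents its surprising behavior, while the fully conventional ``__len__`` is left undocumented.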

requirements.txt

Lines changed: 1 addition & 1 deletion

@@ -1,2 +1,2 @@
 sphinx_rtd_theme==0.2.0
-documenteer==0.3.0a4
+documenteer==0.3.0

services/ldf-resources.rst

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ Machines: (includes VM and Baremetal)
 - VM server machines - running IBM's vSphere software. (@ NCSA 3003, NCSA NPCF, and Chile Base)
 - virtual machines for numerous development environments
 - data backbone test beds, container-type test beds, demo machines, monitoring machines
-- lsst-demo - VM for demos
+- lsst-demo - VM for demos and Docker image needs
 - ATS gateway - VM
 - lsstdb - MySQL infrastructure machine for db support
 - DAQ - camera simulator systems

services/lsst-dev.rst

Lines changed: 4 additions & 12 deletions

@@ -189,18 +189,15 @@ If you are using `Bash`_ — the default shell on ``lsst-dev01`` — try placing
 Load the LSST Environment
 =========================

-We provide ready-to-use “shared” versions of the LSST software stack to enable developers to get up and running quickly with no installation step.
-Each shared stack includes a fully fledged Miniconda-based Python environment, a selection of additional development tools, and a selection of builds of the lsst_distrib meta-package.
-The currently maintained stacks are regularly updated to include the latest weekly release, which is tagged as ``current``.
+We provide a ready-to-use “shared” version of the LSST software stack to enable developers to get up and running quickly with no installation step.
+The shared stack includes a fully-fledged Miniconda-based Python environment, a selection of additional development tools, and a selection of builds of the lsst_distrib meta-package.
+The current stack is regularly updated to include the latest weekly release, which is tagged as ``current``.

 The following stacks are currently maintained:

 ======================================== ============== ================ ============================
 Path                                     Python Version Toolchain        Description
 ======================================== ============== ================ ============================
-:file:`/ssd/lsstsw/stack2_20171021`      2              ``devtoolset-6`` Located on local, SSD-based storage attached to the `lsst-dev01` system: it will support fast interactive use on that machine, but is not accessible across the network.
-:file:`/ssd/lsstsw/stack3_20171021`      3              ``devtoolset-6`` Located on local, SSD-based storage attached to the `lsst-dev01` system: it will support fast interactive use on that machine, but is not accessible across the network.
-:file:`/software/lsstsw/stack2_20171022` 2              ``devtoolset-6`` Located on GPFS-based network storage; as such, it is cross-mounted across a variety of LSST systems at NCSA including those configured as part of the `HTCondor pool`_ and :doc:`Verification Cluster <verification>`.
 :file:`/software/lsstsw/stack3_20171023` 3              ``devtoolset-6`` Located on GPFS-based network storage; as such, it is cross-mounted across a variety of LSST systems at NCSA including those configured as part of the `HTCondor pool`_ and :doc:`Verification Cluster <verification>`.
 ======================================== ============== ================ ============================

@@ -213,19 +210,14 @@ In addition, the following symbolic links point to particular versions of the st
 =============================== =====================================================
 Path                            Description
 =============================== =====================================================
-:file:`/ssd/lsstsw/stack`       The latest version of the stack on local storage using our standard Python version (currently 3).
-:file:`/ssd/lsstsw/stack2`      The latest version of the stack on local storage and based on Python 2.
-:file:`/ssd/lsstsw/stack3`      The latest version of the stack on local storage and based on Python 3.
 :file:`/software/lsstsw/stack`  The latest version of the stack on networked storage using our standard Python version (currently 3).
-:file:`/software/lsstsw/stack2` The latest version of the stack on networked storage and based on Python 2.
-:file:`/software/lsstsw/stack3` The latest version of the stack on networked storage and based on Python 3.
 =============================== =====================================================

 Add a shared stack to your environment and set up the latest build of the LSST applications by running, for example:

 .. prompt:: bash

-   source /ssd/lsstsw/stack/loadLSST.bash
+   source /software/lsstsw/stack/loadLSST.bash
    setup lsst_apps

 (substitute :file:`loadLSST.csh`, :file:`loadLSST.ksh` or :file:`loadLSST.zsh`, depending on your preferred shell).

services/storage.rst

Lines changed: 22 additions & 11 deletions

@@ -4,6 +4,12 @@ Storage Resources

 There are a few other documents that might have the info you are looking for.

+This document covers the file systems, and then the quotas currently in place (at present, only for the /home file system).
+
 1. Look to the :doc:`data_protection` policy page for what the retention policy is, what the immutable files are, and what is to be placed in each file system area.
 2. Look to the :doc:`ldf-resources` LDF resources page for an explanation of each of the file systems, the type of data, where it is to be located, and the policies of each of the file systems.

@@ -15,6 +21,7 @@ Filesystems - in GPFS (4.9PB of storage)
 :file:`/datasets`
    Long-term storage of project-approved shared data. Contains immutable data. This is under a disaster-recovery policy: every 30 days it is stored and written to nearline tape.
+
 :file:`/home`
    Storage of individual-user data. This data is backed up daily, and NCSA retains 30 days of those backups in snapshots. This file system has a 1TB quota (total space) and a 1 million inode quota for each "directory."

@@ -33,29 +40,36 @@ Filesystems - in GPFS (4.9PB of storage)
 Quotas
 ======

 Your home directory is the default directory you are placed in when you log on. You should use this space for storing files you want to keep long term, such as source code, scripts, etc. Every user has a 1TB home directory quota (total space) and a 1 million inode quota (total number of files).

-On **TIME**, quotas were enforced. The soft limit is 1TB and the hard limit is 1.2TB. The inode soft quota is 1 million files and the hard limit is 1.2 million files. If the amount of data in your home directory is over the soft limit but under the hard limit, there is a grace period of 7 days to get under the soft limit. When the grace period expires, you will not be able to write new files or update any current files until you reduce the amount of data to below the soft limit.
+On 6/17/2018, quotas were enforced. The soft limit is 1TB and the hard limit is 1.2TB. The inode soft quota is 1 million files and the hard limit is 1.2 million files. If the amount of data in your home directory is over the soft limit but under the hard limit, there is a grace period of 7 days to get under the soft limit. When the grace period expires, you will not be able to write new files or update any current files until you reduce the amount of data to below the soft limit.

 The command to see your disk usage and limits is :command:`quota`. Example:

 .. code-block:: text

-   [jdoe@golubh4 ~]$ quota
+   [jdoe@systemname4 ~]$ quota
    Directories quota usage for user jdoe:

    -------------------------------------------------------------------------------
    | Fileset    | Used   | Soft   | Hard   | Used | Soft  | Hard  |
    |            | Block  | Quota  | Limit  | File | Quota | Limit |
    -------------------------------------------------------------------------------
-   | home       | 501.1M | 2G     | 4G     | 14   | 0     | 0     |
-   | cse-shared | 0      | 1.465T | 1.953T | 1    | 0     | 0     |
+   | home       | 501.1M | 2G     | 4G     | 14   | 0     | 0     |
+   | stuff      | 0      | 1.465T | 1.953T | 1    | 0     | 0     |
    -------------------------------------------------------------------------------

 Home directories are backed up using snapshots and a separate DR process.

-Data Compression
-================
+Data Compression (for space utilization)
+========================================

 To reduce space usage in your home directory, an option for files that are not in active use is to compress them. The :command:`gzip` utility can be used for file compression and decompression. Another alternative is :command:`bzip2`, which usually yields a better compression ratio than gzip but takes longer to complete. Additionally, files that are typically used together can first be combined into a single file and then compressed using the :command:`tar` utility.

@@ -74,6 +88,7 @@ To decompress the file:

 .. code-block:: bash
+
    gunzip largefile.dat.gz

 Alternatively:

@@ -100,13 +115,9 @@ The convention is to use extension ``.tgz`` in the file name.

 To extract the contents of the compressed tar file:

-.. code-block:: bash
-
-   tar -xvf largedir.tgz

-See the manual pages (``man gzip``, ``man bzip2``, ``man tar``) for more details on these utilities.
+**Notes:**

-.. note::

    ASCII text and binary files like executables can yield good compression ratios. Image file formats (gif, jpg, png, etc.) are already natively compressed, so further compression will not yield much gain.
    Depending on the size of the files, the compression utilities can be compute-intensive and take a while to complete. Use the compute nodes via a batch job for compressing large files.
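The gzip/tar workflow described in this page can also be driven from Python's standard library, which is handy in batch jobs. A sketch using the page's example file names (the temporary working directory is illustrative):

```python
import gzip
import os
import shutil
import tarfile
import tempfile

workdir = tempfile.mkdtemp()
os.chdir(workdir)

# Compress a single file (like `gzip largefile.dat`, but keeping the original).
with open("largefile.dat", "wb") as f:
    f.write(b"sample data\n" * 1000)
with open("largefile.dat", "rb") as src, gzip.open("largefile.dat.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Bundle a directory and compress it in one step (like `tar -czf largedir.tgz largedir`).
os.makedirs("largedir", exist_ok=True)
shutil.copy("largefile.dat", os.path.join("largedir", "largefile.dat"))
with tarfile.open("largedir.tgz", "w:gz") as tar:
    tar.add("largedir")

# Extract the compressed tar file (like `tar -xvf largedir.tgz`).
with tarfile.open("largedir.tgz", "r:gz") as tar:
    tar.extractall("extracted")
```

Because the sample data is highly repetitive, the ``.gz`` file ends up much smaller than the original; real compression ratios depend on the file contents, as noted above.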

stack/building-pipelines-lsst-io-locally.rst

Lines changed: 0 additions & 11 deletions

@@ -34,17 +34,6 @@ Clone the repository:

    git clone https://github.com/lsst/pipelines_lsst_io

-.. important::
-
-   During this initial phase, ``tickets/DM-11216`` is the integration branch for multi-package builds of `pipelines_lsst_io`_.
-   For now you need to check out that branch:
-
-   .. code-block:: bash
-
-      cd pipelines_lsst_io
-      git checkout tickets/DM-11216
-      cd ..
-
 Then set up the `pipelines_lsst_io`_ package with EUPS:

 .. code:: bash
