11 changes: 5 additions & 6 deletions .github/workflows/build_and_test.yml
@@ -11,7 +11,7 @@ on:
workflow_dispatch:

jobs:
-build:
+build_and_test:
strategy:
fail-fast: false

@@ -32,13 +32,12 @@ jobs:
run: |
python -m pip install --upgrade pip
pip install setuptools setuptools_scm wheel
-pip install numpy
-pip install -e .
+pip install -e .[tests]

- name: Check package versions
run: |
-pip show -V pymatgen
+pip show -V pymatgen monty
pip show -V pytest

- name: Test
@@ -51,15 +50,15 @@ jobs:
# pytest --mpl --mpl-generate-summary=html tests/test_plotter.py

- name: Generate GH Actions test plots
-if: always() # always generate the plots, even if the tests fail
+if: failure() && steps.plotting_tests.outcome == 'failure' # Run only if plotting tests fail
run: |
-# Generate the test plots in case there were any failures:
+# Generate the test plots if there were any failures:
pytest --mpl-generate-path=tests/remote_baseline_plots tests/test_plotter.py

# Upload test plots
- name: Archive test plots
if: always()
-uses: actions/upload-artifact@v3
+uses: actions/upload-artifact@v4
with:
name: output-plots
path: tests/remote_baseline_plots
2 changes: 1 addition & 1 deletion .github/workflows/pip_install_test.yml
@@ -8,7 +8,7 @@ on:
- completed # only test when new release has been deployed to PyPI

jobs:
-build:
+install_and_test:
if: ${{ github.event.workflow_run.conclusion == 'success' }}

strategy:
7 changes: 7 additions & 0 deletions CHANGELOG.rst
@@ -1,6 +1,13 @@
Change log
==========

+v2.3.1
+~~~~~~
+- Update ``ISMEAR`` handling (allows ``ISMEAR = -5`` results to be parsed and plotted)
+- Plotting updates (`transition_cutoff`, x-max handling)
+- Cleanup of GitHub Actions (update to supported workflow versions).
+- Some code cleanup

v2.3.0
~~~~~~
- Updates to plotting (cleaner plots and titles to include scientific notation)
4 changes: 2 additions & 2 deletions docs/source/conf.py
@@ -26,7 +26,7 @@
author = "Savyasanchi Aggarwal"

# The full version, including alpha/beta/rc tags
release = "2.3.0"
release = "2.3.1"

# -- General configuration ---------------------------------------------------

@@ -81,7 +81,7 @@
"launch_buttons": {
"binderhub_url": "https://mybinder.org",
"colab_url": "https://colab.research.google.com",
},
},
}

html_logo = "_static/PyTASER.png"
19 changes: 17 additions & 2 deletions pytaser/das_generator.py
@@ -54,7 +54,19 @@ def from_vasp_outputs(
Create a DASGenerator object from VASP output files.

The user should provide the vasprun files for the new system and the reference system,
-followed by the waveder files for the new system and the reference system.
+followed by the WAVEDER files for the new system and the reference system.

+Note that by default, the `ISMEAR` smearing method from the `VASP` calculations
+is used by `PyTASER` when generating the DAS spectra. If `ISMEAR` < -1 (e.g.
+tetrahedron smearing), then this is set to `ISMEAR` = 0 (Gaussian smearing) as
+tetrahedron smearing is not supported by the `pymatgen` optics module used in
+these functions.
+
+The smearing width (equivalent to `SIGMA` in VASP) is controlled by the
+``gaussian_width`` parameter in the `DASGenerator.generate_das()` function,
+which is 0.1 eV by default, regardless of the value used in the underlying
+`VASP` calculations. ``cshift`` and ``gaussian_width`` are the dominant factors
+in determining the broadening of the output spectra.

Args:
vasprun_file_new_system: The vasprun.xml file for the new system.
@@ -128,11 +140,14 @@ def generate_das(
of oscillator strengths. Otherwise, the output DAS is generated considering all contributions
to the predicted DAS spectrum.

+``cshift`` and ``gaussian_width`` are the dominant factors in determining
+the broadening of the output spectra.

Args:
temp: Temperature (K) of material we wish to investigate (affects the FD distribution)
energy_min: Minimum band transition energy to consider for energy mesh (eV)
energy_max: Maximum band transition energy to consider for energy mesh (eV)
-gaussian_width: Width of gaussian curve
+gaussian_width: Gaussian smearing width. Default is 0.1 eV.
cshift: Complex shift in the Kramers-Kronig transformation of the dielectric function
(see https://www.vasp.at/wiki/index.php/CSHIFT). If not set, uses the value of
CSHIFT from the underlying VASP WAVEDER calculation. (only relevant if the
28 changes: 23 additions & 5 deletions pytaser/generator.py
@@ -174,7 +174,9 @@ def _calculate_oscillator_strength(args):
sigma=sigma,
nx=nedos,
dx=deltae,
-ismear=ismear,
+# use Gaussian smearing if ISMEAR = -5 calculation input (as this function only supports ISMEAR
+# >= -1, but results essentially equivalent):
+ismear=ismear if ismear >= -1 else 0,
)

absorption = smeared_wout_matrix_el * abs_matrix_el
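The `ISMEAR` fallback applied above can be isolated as a one-line guard. A minimal sketch of the logic (the helper name `resolve_ismear` is illustrative, not part of PyTASER):

```python
def resolve_ismear(ismear: int) -> int:
    """Map a VASP ISMEAR value onto one supported by the pymatgen optics module.

    The optics module only supports ISMEAR >= -1, so tetrahedron-based settings
    (e.g. ISMEAR = -5) fall back to 0 (Gaussian smearing), which gives
    essentially equivalent spectra.
    """
    return ismear if ismear >= -1 else 0


print(resolve_ismear(-5))  # tetrahedron smearing -> 0 (Gaussian)
print(resolve_ismear(-1))  # Fermi smearing, passed through -> -1
print(resolve_ismear(2))   # Methfessel-Paxton order 2, passed through -> 2
```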
@@ -270,7 +272,8 @@ def occ_dependent_alpha(
spin: Which spin channel to include.
sigma: Smearing width (in eV) for broadening of the dielectric function (see
https://www.vasp.at/wiki/index.php/SIGMA). If not set, uses the value of SIGMA from the
-underlying VASP WAVEDER calculation.
+underlying VASP WAVEDER calculation. Note that the default in ``TASGenerator.generate_tas()``
+is to use a Gaussian width of 0.1 eV, regardless of SIGMA in the VASP calculation.
cshift: Complex shift in the Kramers-Kronig transformation of the dielectric function (see
https://www.vasp.at/wiki/index.php/CSHIFT). If not set, uses the value of CSHIFT from
the underlying VASP WAVEDER calculation.
@@ -449,6 +452,18 @@ def from_vasp_outputs(cls, vasprun_file, waveder_file=None, bg=None):
"""
Create a TASGenerator object from VASP output files.

+Note that by default, the `ISMEAR` smearing method from the `VASP` calculation
+is used by `PyTASER` when generating the TAS spectra. If `ISMEAR` < -1 (e.g.
+tetrahedron smearing), then this is set to `ISMEAR` = 0 (Gaussian smearing) as
+tetrahedron smearing is not supported by the `pymatgen` optics module used in
+these functions.
+
+The smearing width (equivalent to `SIGMA` in VASP) is controlled by the
+``gaussian_width`` parameter in the `TASGenerator.generate_tas()` function,
+which is 0.1 eV by default, regardless of the value used in the underlying
+`VASP` calculation. ``cshift`` and ``gaussian_width`` are the dominant factors
+in determining the broadening of the output TAS spectra.

Args:
vasprun_file: Path to vasprun.xml file (to generate bandstructure object).
waveder_file: Path to WAVEDER file (to generate dielectric function calculator object,
@@ -570,13 +585,16 @@ def generate_tas(
the output TAS is generated considering all contributions to the
predicted TAS spectrum.

+``cshift`` and ``gaussian_width`` are the dominant factors in determining
+the broadening of the output TAS spectra.

Args:
temp: Temperature (K) of material we wish to investigate (affects the FD distribution)
conc: Carrier concentration (cm^-3) of holes and electrons (both are equivalent).
Inversely proportional to pump-probe time delay.
energy_min: Minimum band transition energy to consider for energy mesh (eV)
energy_max: Maximum band transition energy to consider for energy mesh (eV)
-gaussian_width: Width of gaussian curve
+gaussian_width: Gaussian smearing width. Default is 0.1 eV.
cshift: Complex shift in the Kramers-Kronig transformation of the dielectric function
(see https://www.vasp.at/wiki/index.php/CSHIFT). If not set, uses the value of
CSHIFT from the underlying VASP WAVEDER calculation. (only relevant if the
@@ -722,9 +740,9 @@ def generate_tas(

if self.bs.is_spin_polarized:
spin_str = "up" if spin == Spin.up else "down"
-key = (new_i, new_f, spin_str)
+key = (int(new_i), int(new_f), spin_str)
else:
-key = (new_i, new_f)
+key = (int(new_i), int(new_f))

jdos_dark_if[key] = jd_dark
jdos_light_if[key] = jd_light
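The `int()` casts on the transition keys matter for serialisation: `as_dict()` stores each tuple key as `str(key)`, and `from_dict()` restores it with `ast.literal_eval`, which only accepts plain literals. A small sketch of why a numpy integer scalar left inside the key would break that round trip (no numpy needed to demonstrate):

```python
import ast

# Plain-int tuple keys round-trip cleanly through str() / ast.literal_eval:
key = (int(3), int(7))
assert ast.literal_eval(str(key)) == (3, 7)

# A numpy scalar inside the tuple would stringify as e.g. "(np.int64(3), 7)"
# (numpy >= 2.0 repr), which literal_eval rejects as a non-literal -- hence
# the int() casts before the key is built:
try:
    ast.literal_eval("(np.int64(3), 7)")
    raised = False
except ValueError:
    raised = True
print(raised)  # True
```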
28 changes: 18 additions & 10 deletions pytaser/plotter.py
@@ -244,7 +244,7 @@ class generated by DASGenerator().generate_das(). If the TASGenerator
xmax_ind = np.abs(energy_mesh - xmin).argmin()
if xmax is not None:
xmin_ind = np.abs(energy_mesh - xmax).argmin()
if yaxis.lower() == "das" or yaxis.lower() == "jdos_das":
if yaxis.lower() in ["das", "jdos_das"]:
bg = self.bandgap_ref_lambda
bg_ref = self.bandgap_ref_lambda
bg_new_sys = self.bandgap_new_sys_lambda
@@ -258,7 +258,7 @@ class generated by DASGenerator().generate_das(). If the TASGenerator
xmin_ind = np.abs(energy_mesh - xmin).argmin()
if xmax is not None:
xmax_ind = np.abs(energy_mesh - xmax).argmin()
if yaxis.lower() == "das" or yaxis.lower() == "jdos_das":
if yaxis.lower() in ["das", "jdos_das"]:
bg = self.bandgap_ref
bg_ref = self.bandgap_ref
bg_new_sys = self.bandgap_new_sys
@@ -291,6 +291,8 @@ class generated by DASGenerator().generate_das(). If the TASGenerator
)

def _rescale_overlapping_curves(list_of_curves):
+if not [curve for curve in list_of_curves if curve is not None]:
+return list_of_curves
local_extrema_coords = []
output_list_of_curves = []
# get max value of all curves to use as relative scaling factor:
@@ -404,7 +406,7 @@ def _rescale_overlapping_curves(list_of_curves):
plt.plot(
energy_mesh[xmin_ind:xmax_ind],
list_of_curves[i] / weighted_jdos_normalisation_factor,
-label=str(transition) + " (light)",
+label=f"{transition!s} (light)",
color=f"C{2 * i}",
lw=2.5,
)
@@ -439,7 +441,7 @@ def _rescale_overlapping_curves(list_of_curves):
plt.plot(
energy_mesh[xmin_ind:xmax_ind],
list_of_curves[i] / weighted_jdos_normalisation_factor,
-label=str(transition) + " (light)",
+label=f"{transition!s} (light)",
lw=2.5,
color=f"C{2 * i}",
)
@@ -566,7 +568,7 @@ def _rescale_overlapping_curves(list_of_curves):
plt.plot(
energy_mesh[xmin_ind:xmax_ind],
list_of_curves[i],
-label=str(transition) + " (light)",
+label=f"{transition!s} (light)",
color=f"C{2 * i}",
lw=2.5,
)
@@ -598,7 +600,7 @@ def _rescale_overlapping_curves(list_of_curves):
plt.plot(
energy_mesh[xmin_ind:xmax_ind],
list_of_curves[i],
-label=str(transition) + " (light)",
+label=f"{transition!s} (light)",
lw=2.5,
color=f"C{2 * i}",
)
@@ -693,7 +695,7 @@ def _rescale_overlapping_curves(list_of_curves):
if ymin is None:
ymin = y_axis_min

if yaxis.lower() == "das" or yaxis.lower() == "jdos_das":
if yaxis.lower() in ["das", "jdos_das"]:
if bg_ref is not None and bg_new_sys is not None:
y_bg = np.linspace(ymin, ymax)
x_bg_ref = np.empty(len(y_bg), dtype=float)
@@ -703,14 +705,14 @@ def _rescale_overlapping_curves(list_of_curves):
plt.plot(
x_bg_new_sys,
y_bg,
-label=label_name + " Bandgap",
+label=f"{label_name} Bandgap",
color="red",
ls="--",
)
plt.plot(
x_bg_ref,
y_bg,
-label=labe_name_ref + " Bandgap",
+label=f"{labe_name_ref} Bandgap",
color="blue",
ls="--",
)
@@ -757,10 +759,16 @@ def _rescale_overlapping_curves(list_of_curves):
# Set x limit to 95% of min x-value
xmin = min_x_for_y_gt_0 * 0.95

+if xmin is not None:
+xmin = max(xmin, 0)
if xmax is not None:
xmax = min(xmax, max(energy_mesh))
xmax = min(xmax, 5000) # limit xmax to 5000 nm (5 µm) to avoid plotting issues

plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)

if yaxis.lower() == "das" or yaxis.lower() == "jdos_das":
if yaxis.lower() in ["das", "jdos_das"]:
if self.material_name is not None:
# add $_X$ around each digit X in self.material_name, to give formatted chemical formula
formatted_material_name = re.sub(r"(\d)", r"$_{\1}$", self.material_name)
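The new x-limit handling in the plotter amounts to a small clamp step: negative lower limits are raised to 0, and the upper limit is capped by both the energy mesh and a 5000 nm (5 µm) ceiling. A sketch under the assumption that either limit may be `None` (the helper name `clamp_x_limits` is hypothetical, not a PyTASER function):

```python
def clamp_x_limits(xmin, xmax, energy_mesh_max):
    """Clamp plot x-limits: xmin floored at 0; xmax capped by the energy
    mesh maximum and by 5000 nm to avoid plotting issues at long wavelengths."""
    if xmin is not None:
        xmin = max(xmin, 0)
    if xmax is not None:
        xmax = min(xmax, energy_mesh_max)
        xmax = min(xmax, 5000)
    return xmin, xmax


print(clamp_x_limits(-0.5, 6000, 4500))  # (0, 4500)
print(clamp_x_limits(None, None, 4500))  # (None, None)
```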
47 changes: 20 additions & 27 deletions pytaser/tas.py
@@ -9,21 +9,6 @@
from monty.json import MontyDecoder


-def convert_to_tuple(subdict):
-"""
-Converts subdict representation to tuple.
-
-Args:
-subdict: dict,
-
-Returns:
-subdict: tuple
-"""
-if isinstance(subdict, dict) and "@module" not in subdict:
-return {ast.literal_eval(k) if "(" in k and ")" in k else k: v for k, v in subdict.items()}
-return subdict


def decode_dict(subdict):
"""
Decode subdict from a dict representation using MontyDecoder.
@@ -37,9 +22,19 @@ def decode_dict(subdict):
if isinstance(subdict, dict):
if "@module" in subdict:
return MontyDecoder().process_decoded(subdict)
-for k, v in subdict.items():
+
+for k in list(subdict.keys()):
+v = subdict.pop(k)
+key = ast.literal_eval(k) if k.startswith("(") and k.endswith(")") else k
+
+# Note that future code updates could avoid the use of tuples as keys, to avoid the
+# additional handling / conversion steps here
+
if isinstance(v, dict) and "@module" in v:
-subdict[k] = MontyDecoder().process_decoded(v)
+v = MontyDecoder().process_decoded(v)
+
+subdict[key] = v

return subdict
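The key handling in the rewritten `decode_dict` amounts to restoring stringified tuple keys so downstream code can index transitions by `(initial_band, final_band)` again. A minimal standalone sketch of that pattern (the helper name `restore_tuple_keys` is illustrative):

```python
import ast


def restore_tuple_keys(d):
    """Restore keys that look like stringified tuples, e.g. "(1, 2)", to real
    tuples; all other keys are left unchanged."""
    return {
        ast.literal_eval(k) if k.startswith("(") and k.endswith(")") else k: v
        for k, v in d.items()
    }


serialised = {"(1, 2)": 0.5, "alphas": [0.1, 0.2]}
print(restore_tuple_keys(serialised))  # {(1, 2): 0.5, 'alphas': [0.1, 0.2]}
```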


@@ -163,10 +158,10 @@ def as_dict(self):
"weighted_jdos_diff_if": self.weighted_jdos_diff_if,
}
for key, value in json_dict.items():
-if isinstance(value, dict):
+if isinstance(value, dict): # likely transitions dictionary, convert tuples to strings
json_dict[key] = {
-str(k): v for k, v in value.items()
-} # decomp dicts, can't have tuples as keys
+str((int(k[0]), int(k[1]))) if isinstance(k, tuple) else k: v for k, v in value.items()
+} # decomp dicts, can't have tuples as keys, so convert to str of tuple of integers
return json_dict

@classmethod
@@ -181,8 +176,7 @@ def from_dict(cls, d):
Returns:
Tas object
"""
-d_dec = {k: convert_to_tuple(v) for k, v in d.items()}
-d_decoded = {k: decode_dict(v) for k, v in d_dec.items()}
+d_decoded = {k: decode_dict(v) for k, v in d.items()}

for monty_key in ["@module", "@class"]:
if monty_key in d_decoded:
Expand Down Expand Up @@ -303,10 +297,10 @@ def as_dict(self):
"weighted_jdos_ref_if": self.weighted_jdos_ref_if,
}
for key, value in json_dict.items():
-if isinstance(value, dict):
+if isinstance(value, dict): # likely transitions dictionary, convert tuples to strings
json_dict[key] = {
-str(k): v for k, v in value.items()
-} # decomp dicts, can't have tuples as keys
+str((int(k[0]), int(k[1]))) if isinstance(k, tuple) else k: v for k, v in value.items()
+} # decomp dicts, can't have tuples as keys, so convert to str of tuple of integers
return json_dict

@classmethod
@@ -321,7 +315,6 @@ def from_dict(cls, d):
Returns:
Das object
"""
-d_dec = {k: convert_to_tuple(v) for k, v in d.items()}
-d_decoded = {k: decode_dict(v) for k, v in d_dec.items()}
+d_decoded = {k: decode_dict(v) for k, v in d.items()}

return cls(**d_decoded)
2 changes: 1 addition & 1 deletion requirements.txt
@@ -2,7 +2,7 @@ pytest>=7.1.3
pytest-mpl
numpy
monty
-pymatgen>=2024.1.27
+pymatgen>=2023.05.31
matplotlib>=3.7.1
scipy
pathlib