The GSObject base class

This class defines most of the public API methods for working with the various surface brightness profiles: transforming them, drawing them, etc.

Note that not all methods are allowed to be called for all subclasses. For instance, some classes only define the profile in Fourier space, so methods which need to access the profile in real space may not be implemented. In such cases, a NotImplementedError will be raised.

class galsim.GSObject[source]

Base class for all GalSim classes that represent some kind of surface brightness profile.

A GSObject is not intended to be constructed directly. Normally, you would use whatever derived class is appropriate for the surface brightness profile you want:

>>> gal = galsim.Sersic(n=4, half_light_radius=4.3)
>>> psf = galsim.Moffat(beta=3, fwhm=2.85)
>>> conv = galsim.Convolve([gal,psf])

All of these classes are subclasses of GSObject, so you should see those docstrings for more details about how to construct the various profiles. Here we discuss attributes and methods that are common to all GSObjects.

GSObjects are always defined in sky coordinates. So all sizes and other linear dimensions should be in terms of some kind of units on the sky, arcsec for instance. Only later (when they are drawn) is the connection to pixel coordinates established via a pixel scale or WCS. (See the documentation for galsim.BaseWCS for more details about how to specify various kinds of world coordinate systems more complicated than a simple pixel scale.)

For instance, if you eventually draw onto an image that has a pixel scale of 0.2 arcsec/pixel, then the normal thing to do would be to define your surface brightness profiles in terms of arcsec and then draw with pixel_scale=0.2. However, while arcsec are the usual choice of units for the sky coordinates, if you wanted, you could instead define the sizes of all your galaxies and PSFs in terms of radians and then use pixel_scale=0.2/206265 when you draw them.
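
For instance, a minimal sketch of the usual arcsec-based workflow (the profile and pixel scale values here are purely illustrative):

>>> gal = galsim.Exponential(half_light_radius=1.3)   # size in arcsec
>>> image = gal.drawImage(scale=0.2)                  # pixel scale in arcsec/pixel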

Transforming methods:

The GSObject class uses an “immutable” design, so all methods that would potentially modify the object actually return a new object instead. Internally this is done with references rather than deep copies, so it all happens efficiently, and it makes using the objects simpler, since you don’t need to worry about some function changing your object behind your back.

In all cases below, we just give an example usage. See the docstrings for the methods for more details about how to use them:

>>> obj = obj.shear(shear)      # Apply a shear to the object.
>>> obj = obj.dilate(scale)     # Apply a flux-preserving dilation.
>>> obj = obj.magnify(mu)       # Apply a surface-brightness-preserving magnification.
>>> obj = obj.rotate(theta)     # Apply a rotation.
>>> obj = obj.shift(dx,dy)      # Shift the object in real space.
>>> obj = obj.transform(dudx,dudy,dvdx,dvdy)    # Apply a general Jacobian transformation.
>>> obj = obj.lens(g1,g2,mu)    # Apply both a lensing shear and magnification.
>>> obj = obj.withFlux(flux)    # Set a new flux value.
>>> obj = obj * ratio           # Scale the surface brightness profile by some factor.
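
These transformations can also be chained. For instance (a minimal sketch with illustrative values):

>>> gal = galsim.Sersic(n=1.5, half_light_radius=2.0, flux=1.e4)
>>> gal = gal.shear(g1=0.1, g2=0.05).rotate(30 * galsim.degrees).shift(0.3, -0.2)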

Access Methods:

There are some access methods and properties that are available for all GSObjects. Again, see the docstrings for each method for more details:

>>> obj.flux
>>> obj.centroid
>>> obj.nyquist_scale
>>> obj.stepk
>>> obj.maxk
>>> obj.has_hard_edges
>>> obj.is_axisymmetric
>>> obj.is_analytic_x
>>> obj.is_analytic_k
>>> obj.xValue(x,y) or obj.xValue(pos)
>>> obj.kValue(kx,ky) or obj.kValue(kpos)
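
For instance, with a simple Gaussian (illustrative parameters), a few of these look like:

>>> gauss = galsim.Gaussian(sigma=1., flux=1.)
>>> gauss.flux                   # 1.0
>>> gauss.is_analytic_x          # True
>>> gauss.xValue(0., 0.)         # peak SB = flux / (2 pi sigma^2) ~= 0.159 for a Gaussian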

Most subclasses have additional methods that are available for values that are particular to that specific surface brightness profile. e.g. sigma = gauss.sigma. However, note that class-specific methods are not available after performing one of the above transforming operations:

>>> gal = galsim.Gaussian(sigma=5)
>>> gal = gal.shear(g1=0.2, g2=0.05)
>>> sigma = gal.sigma               # This will raise an exception.

It is however possible to access the original object that was transformed via the original attribute:

>>> sigma = gal.original.sigma      # This works.

No matter how many transformations are performed, the original attribute will contain the _original_ object (not necessarily the most recent ancestor).

Drawing Methods:

The main thing to do with a GSObject once you have built it is to draw it onto an image. There are two methods that do this. In both cases, there are lots of optional parameters. See the docstrings for these methods for more details:

>>> image = obj.drawImage(...)
>>> kimage = obj.drawKImage(...)

There are two attributes that may be available for a GSObject.

Attributes:
  • original – This was mentioned above as a way to access the original object that has been transformed by one of the transforming methods.

  • noise – Some types, like RealGalaxy, set this attribute to be the intrinsic noise that is already inherent in the profile and will thus be present when you draw the object. The noise is propagated correctly through the various transforming methods, as well as convolutions and flux rescalings. Note that the noise attribute can be set directly by users even for GSObjects that do not naturally have one. The typical use for this attribute is to use it to whiten the noise in the image after drawing. See BaseCorrelatedNoise for more details.

GSParams:

All GSObject classes take an optional gsparams argument, so we document that feature here. For all documentation about the specific derived classes, please see the docstring for each one individually.

The gsparams argument can be used to specify various numbers that govern the tradeoff between accuracy and speed for the calculations made in drawing a GSObject. The numbers are encapsulated in a class called GSParams, and the user should make careful choices whenever they opt to deviate from the defaults. For more details about the parameters and their default values, please see the docstring of the GSParams class.

For example, let’s say you want to do something that requires an FFT larger than 4096 x 4096 (and you have enough memory to handle it!). Then you can create a new GSParams object with a larger maximum_fft_size and pass that to your GSObject on construction:

>>> gal = galsim.Sersic(n=4, half_light_radius=4.3)
>>> psf = galsim.Moffat(beta=3, fwhm=2.85)
>>> conv = galsim.Convolve([gal,psf])
>>> im = galsim.Image(1000,1000, scale=0.02)        # Note the very small pixel scale!
>>> im = conv.drawImage(image=im)                   # This uses the default GSParams.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "galsim/gsobject.py", line 1666, in drawImage
    added_photons = prof.drawFFT(draw_image, add)
  File "galsim/gsobject.py", line 1877, in drawFFT
    kimage, wrap_size = self.drawFFT_makeKImage(image)
  File "galsim/gsobject.py", line 1802, in drawFFT_makeKImage
    raise GalSimFFTSizeError("drawFFT requires an FFT that is too large.", Nk)
galsim.errors.GalSimFFTSizeError: drawFFT requires an FFT that is too large.
The required FFT size would be 12288 x 12288, which requires 3.38 GB of memory.
If you can handle the large FFT, you may update gsparams.maximum_fft_size.
>>> big_fft_params = galsim.GSParams(maximum_fft_size=12300)
>>> conv = galsim.Convolve([gal,psf],gsparams=big_fft_params)
>>> im = conv.drawImage(image=im)                   # Now it works (but is slow!)
>>> im.write('high_res_sersic.fits')

Note that for compound objects such as Convolution or Sum, not all GSParams can be changed when the compound object is created. In the example given here, it is possible to change parameters related to the drawing, but not the Fourier space parameters for the components that go into the Convolution. To get better sampling in Fourier space, for example, the gal and/or psf should be created with gsparams that have a non-default value of folding_threshold. This statement applies to the threshold and accuracy parameters.
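
For example, a short sketch of providing a non-default folding_threshold to the components before convolving (the values are illustrative; 5.e-3 is the documented default):

>>> gsp = galsim.GSParams(folding_threshold=1.e-3)   # default is 5.e-3
>>> gal = galsim.Sersic(n=4, half_light_radius=4.3, gsparams=gsp)
>>> psf = galsim.Moffat(beta=3, fwhm=2.85, gsparams=gsp)
>>> conv = galsim.Convolve([gal, psf])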

__add__(other)[source]

Add two GSObjects.

Equivalent to Add(self, other)

__sub__(other)[source]

Subtract two GSObjects.

Equivalent to Add(self, -1 * other)
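
For example, a bulge-plus-disk model (hypothetical component choices) can be built with the + operator:

>>> bulge = galsim.DeVaucouleurs(half_light_radius=1.0, flux=0.3)
>>> disk = galsim.Exponential(half_light_radius=2.0, flux=0.7)
>>> gal = bulge + disk          # equivalent to galsim.Add(bulge, disk)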

__mul__(other)[source]

Scale the flux of the object by the given factor.

obj * flux_ratio is equivalent to obj.withScaledFlux(flux_ratio)

It creates a new object that has the same profile as the original, but with the surface brightness at every location scaled by the given amount.

You can also multiply by an SED, which will create a ChromaticObject where the SED acts like a wavelength-dependent flux_ratio.
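
For example (illustrative flux values):

>>> gal = galsim.Gaussian(sigma=1., flux=100.)
>>> brighter = gal * 3.                  # flux is now 300
>>> same = gal.withScaledFlux(3.)        # equivalent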

__rmul__(other)[source]

Equivalent to obj * other. See __mul__ for details.

__div__(other)[source]

Equivalent to obj * (1/other). See __mul__ for details.

_xValue(pos)[source]

Equivalent to xValue, but pos must be a galsim.PositionD instance

Parameters:

pos – The position at which you want the surface brightness of the object.

Returns:

the surface brightness at that position.

_kValue(kpos)[source]

Equivalent to kValue, but kpos must be a galsim.PositionD instance.

_shear(shear)[source]

Equivalent to GSObject.shear, but without the overhead of sanity checks or other ways to input the shear value.

Also, it won’t propagate any noise attribute.

Parameters:

shear – The Shear to be applied.

Returns:

the sheared object.

_shift(dx, dy)[source]

Equivalent to shift, but without the overhead of sanity checks or option to give the shift as a PositionD.

Also, it won’t propagate any noise attribute.

Parameters:
  • dx – The x-component of the shift to apply

  • dy – The y-component of the shift to apply

Returns:

the shifted object.

_drawReal(image, jac=None, offset=(0.0, 0.0), flux_scaling=1.0)[source]

A version of drawReal without the sanity checks or some options.

This is nearly equivalent to the regular drawReal(image, add_to_image=False), but the image’s dtype must be either float32 or float64, and it must have a c_contiguous array (image.iscontiguous must be True).

_calculate_nphotons(n_photons, poisson_flux, max_extra_noise, rng)[source]

Calculate how many photons to shoot and what flux_ratio (called g) each one should have in order to produce an image with the right S/N and total flux.

This routine is normally called by drawPhot.

Returns:

n_photons, g

_shoot(photons, rng)[source]

Shoot photons into the given PhotonArray.

This is the backend implementation of shoot once the PhotonArray has been constructed.

Parameters:
  • photons – A PhotonArray instance into which the photons should be placed.

  • rng – A BaseDeviate instance to use for the photon shooting.

_drawKImage(image, jac=None)[source]

A version of drawKImage without the sanity checks or some options.

Equivalent to drawKImage(image, add_to_image=False, recenter=False), but without the option to create the image automatically.

The input image must be provided as a complex Image instance (dtype=complex64 or complex128), and the bounds should be set up appropriately (e.g. with 0,0 in the center if so desired). This corresponds to recenter=False for the normal drawKImage. And, it must have a c_contiguous array (image.iscontiguous must be True).

Parameters:

image – The Image onto which to draw the k-space image. [required]

applyTo(photon_array, local_wcs=None, rng=None)[source]

Apply this surface brightness profile as a convolution to an existing photon array.

This method allows a GSObject to duck type as a PhotonOp, so one can apply a PSF in a photon_ops list.

Parameters:
  • photon_array – A PhotonArray to apply the operator to.

  • local_wcs – A LocalWCS instance defining the local WCS for the current photon bundle in case the operator needs this information. [default: None]

  • rng – A random number generator to use to effect the convolution. [default: None]
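
For example, a minimal sketch of using a PSF profile as a photon operator when photon shooting (the profiles and pixel scale are illustrative):

>>> psf = galsim.Gaussian(fwhm=0.7)
>>> gal = galsim.Exponential(half_light_radius=1.3)
>>> im = gal.drawImage(method='phot', photon_ops=[psf], scale=0.2)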

atRedshift(redshift)[source]

Create a version of the current object with a different redshift.

For regular GSObjects, this method doesn’t do anything aside from setting a redshift attribute with the given value. But this allows duck typing with ChromaticObjects where this function will adjust the SED appropriately.

Returns:

the object with the new redshift

calculateFWHM(size=None, scale=None, centroid=None)[source]

Returns the full-width half-maximum (FWHM) of the object.

If the profile has a fwhm attribute, it will just return that, but in the general case, we draw the profile and estimate the FWHM directly.

As with calculateHLR and calculateMomentRadius, this function optionally takes size and scale values to use for the image drawing. The default is to use the Nyquist scale for the pixel scale and let drawImage choose a size for the stamp that will enclose at least 99.5% of the flux. These were found to produce results accurate to well below one percent on our internal tests, so it is unlikely that you will want to adjust them for accuracy. However, using a smaller size than default could help speed up the calculation, since the default is usually much larger than is needed.

Parameters:
  • size – If given, the stamp size to use for the drawn image. [default: None, which will let drawImage choose the size automatically]

  • scale – If given, the pixel scale to use for the drawn image. [default: self.nyquist_scale]

  • centroid – The position to use for the centroid. [default: self.centroid]

Returns:

an estimate of the full-width half-maximum in physical units

calculateHLR(size=None, scale=None, centroid=None, flux_frac=0.5)[source]

Returns the half-light radius of the object.

If the profile has a half_light_radius attribute, it will just return that, but in the general case, we draw the profile and estimate the half-light radius directly.

This function (by default at least) is only accurate to a few percent, typically. Possibly worse depending on the profile being measured. If you care about a high precision estimate of the half-light radius, the accuracy can be improved using the optional parameter scale to change the pixel scale used to draw the profile.

The default scale is half the Nyquist scale, which was found to produce results accurate to a few percent on our internal tests. Using a smaller scale will be more accurate at the expense of speed.

In addition, you can optionally specify the size of the image to draw. The default size is None, which means drawImage will choose a size designed to contain around 99.5% of the flux. This is overkill for this calculation, so choosing a smaller size than this may speed up this calculation somewhat.

Also, while the name of this function refers to the half-light radius, in fact it can also calculate radii that enclose other fractions of the light, according to the parameter flux_frac. E.g. for r90, you would set flux_frac=0.90.

The default scale should usually be acceptable for things like testing that a galaxy has a reasonable resolution, but they should not be trusted for very fine grain discriminations.

Parameters:
  • size – If given, the stamp size to use for the drawn image. [default: None, which will let drawImage choose the size automatically]

  • scale – If given, the pixel scale to use for the drawn image. [default: 0.5 * self.nyquist_scale]

  • centroid – The position to use for the centroid. [default: self.centroid]

  • flux_frac – The fraction of light to be enclosed by the returned radius. [default: 0.5]

Returns:

an estimate of the half-light radius in physical units
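
For example (illustrative profile; the second call computes r90 rather than the half-light radius):

>>> gal = galsim.Exponential(half_light_radius=2.5).shear(g1=0.1, g2=0.)
>>> hlr = gal.calculateHLR()
>>> r90 = gal.calculateHLR(flux_frac=0.90)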

calculateMomentRadius(size=None, scale=None, centroid=None, rtype='det')[source]

Returns an estimate of the radius based on unweighted second moments.

The second moments are defined as:

Q_ij = int( I(x,y) i j dx dy ) / int( I(x,y) dx dy ) where i,j may be either x or y.

If I(x,y) is a Gaussian, then T = Tr(Q) = Qxx + Qyy = 2 sigma^2. Thus, one reasonable choice for a “radius” for an arbitrary profile is sqrt(T/2).

In addition, det(Q) = sigma^4. So another choice for an arbitrary profile is det(Q)^1/4.

This routine can return either of these measures according to the value of the rtype parameter. rtype='trace' will cause it to return sqrt(T/2). rtype='det' will cause it to return det(Q)^1/4. And rtype='both' will return a tuple with both values.

Note that for the special case of a Gaussian profile, no calculation is necessary, and the sigma attribute will be used in both cases. In the limit as scale->0, this function will return the same value, but because finite pixels are drawn, the results will not be precisely equal for real use cases. The approximation being made is that the integral of I(x,y) i j dx dy over each pixel can be approximated as int(I(x,y) dx dy) * i_center * j_center.

This function (by default at least) is only accurate to a few percent, typically. Possibly worse depending on the profile being measured. If you care about a high precision estimate of the radius, the accuracy can be improved using the optional parameters size and scale to change the size and pixel scale used to draw the profile.

The default is to use the Nyquist scale for the pixel scale and let drawImage choose a size for the stamp that will enclose at least 99.5% of the flux. These were found to produce results accurate to a few percent on our internal tests. Using a smaller scale and larger size will be more accurate at the expense of speed.

The default parameters should usually be acceptable for things like testing that a galaxy has a reasonable resolution, but they should not be trusted for very fine grain discriminations. For a more accurate estimate, see galsim.hsm.FindAdaptiveMom.

Parameters:
  • size – If given, the stamp size to use for the drawn image. [default: None, which will let drawImage choose the size automatically]

  • scale – If given, the pixel scale to use for the drawn image. [default: self.nyquist_scale]

  • centroid – The position to use for the centroid. [default: self.centroid]

  • rtype – There are three options for this parameter: ‘trace’ means return sqrt(T/2); ‘det’ means return det(Q)^1/4; ‘both’ means return both values as a tuple (sqrt(T/2), det(Q)^1/4). [default: ‘det’]

Returns:

an estimate of the radius in physical units (or both estimates if rtype == ‘both’)
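
For example (illustrative profile; for a Gaussian both measures reduce to sigma):

>>> gauss = galsim.Gaussian(sigma=1.7)
>>> gauss.calculateMomentRadius()                      # ~1.7 for rtype='det'
>>> rtrace, rdet = gauss.calculateMomentRadius(rtype='both')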

property centroid

The (x, y) centroid of an object as a PositionD.

dilate(scale)[source]

Dilate the linear size of the profile by the given scale factor, while preserving flux.

e.g. half_light_radius <-- half_light_radius * scale

See expand() and magnify() for versions that preserve surface brightness, and thus change the flux.

Parameters:

scale – The linear rescaling factor to apply.

Returns:

the dilated object.

drawFFT(image, add_to_image=False)[source]

Draw this profile into an Image by computing the k-space image and performing an FFT.

This is usually called from the drawImage function, rather than called directly by the user. In particular, the input image must be already set up with defined bounds. The profile will be drawn centered on whatever pixel corresponds to (0,0) with the given bounds, not the image center (unlike drawImage). The image also must have a PixelScale wcs. The profile being drawn should have already been converted to image coordinates via:

>>> image_profile = original_wcs.toImage(original_profile)

Note that the Image produced by drawFFT represents the profile sampled at the center of each pixel and then multiplied by the pixel area. That is, the profile is NOT integrated over the area of the pixel. This is equivalent to method=’no_pixel’ in drawImage. If you want to render a profile integrated over the pixel, you can convolve with a Pixel first and draw that.

Parameters:
  • image – The Image onto which to place the flux. [required]

  • add_to_image – Whether to add flux to the existing image rather than clear out anything in the image before drawing. [default: False]

Returns:

The total flux drawn inside the image bounds.

drawFFT_finish(image, kimage, wrap_size, add_to_image)[source]

This is a helper routine for drawFFT that finishes the calculation, based on the drawn k-space image.

It applies the Fourier transform to kimage and adds the result to image.

Parameters:
  • image – The Image onto which to place the flux.

  • kimage – The k-space Image where the object was drawn.

  • wrap_size – The size of the region to wrap kimage, which must be either the same size as kimage or smaller.

  • add_to_image – Whether to add flux to the existing image rather than clear out anything in the image before drawing.

Returns:

The total flux drawn inside the image bounds.

drawFFT_makeKImage(image)[source]

This is a helper routine for drawFFT that just makes the (blank) k-space image onto which the profile will be drawn. This can be useful if you want to break up the calculation into parts for extra efficiency. E.g. save the k-space image of the PSF so drawing many models of the galaxy with the given PSF profile can avoid drawing the PSF each time.

Parameters:

image – The Image onto which to place the flux.

Returns:

(kimage, wrap_size), where wrap_size is either the size of kimage or smaller if the result should be wrapped before doing the inverse fft.
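
For example, a rough sketch of splitting the work up by hand, mirroring the steps that drawFFT takes internally (here image_profile and image are assumed to be set up as described for drawFFT, i.e. already converted to image coordinates and with defined bounds):

>>> kimage, wrap_size = image_profile.drawFFT_makeKImage(image)
>>> image_profile._drawKImage(kimage)
>>> added_flux = image_profile.drawFFT_finish(image, kimage, wrap_size, add_to_image=False)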

drawImage(image=None, nx=None, ny=None, bounds=None, scale=None, wcs=None, dtype=None, method='auto', area=1.0, exptime=1.0, gain=1.0, add_to_image=False, center=None, use_true_center=True, offset=None, n_photons=0.0, rng=None, max_extra_noise=0.0, poisson_flux=None, sensor=None, photon_ops=None, n_subsample=3, maxN=None, save_photons=False, bandpass=None, setup_only=False, surface_ops=None)[source]

Draws an Image of the object.

The drawImage() method is used to draw an Image of the current object using one of several possible rendering methods (see below). It can create a new Image or can draw onto an existing one if provided by the image parameter. If the image is given, you can also optionally add to the given Image if add_to_image = True, but the default is to replace the current contents with new values.

Providing an input image:

Note that if you provide an image parameter, it is the image onto which the profile will be drawn. The provided image will be modified. A reference to the same image is also returned to provide a parallel return behavior to when image is None (described above).

This option is useful in practice because you may want to construct the image first and then draw onto it, perhaps multiple times. For example, you might be drawing onto a subimage of a larger image. Or you may want to draw different components of a complex profile separately. In this case, the returned value is typically ignored. For example:

>>> im1 = bulge.drawImage()
>>> im2 = disk.drawImage(image=im1, add_to_image=True)
>>> assert im1 is im2

>>> full_image = galsim.Image(2048, 2048, scale=pixel_scale)
>>> b = galsim.BoundsI(x-32, x+32, y-32, y+32)
>>> stamp = obj.drawImage(image = full_image[b])
>>> assert (stamp.array == full_image[b].array).all()

Letting drawImage create the image for you:

If drawImage() will be creating the image from scratch for you, then there are several ways to control the size of the new image. If the nx and ny keywords are present, then an image with these numbers of pixels on a side will be created. Similarly, if the bounds keyword is present, then an image with the specified bounds will be created. Note that it is an error to provide an existing Image when also specifying nx, ny, or bounds. In the absence of nx, ny, and bounds, drawImage will decide a good size to use based on the size of the object being drawn. Basically, it will try to use an area large enough to include at least 99.5% of the flux.

Note

This value 0.995 is really 1 - folding_threshold. You can change the value of folding_threshold for any object via GSParams.

You can set the pixel scale of the constructed image with the scale parameter, or set a WCS function with wcs. If you do not provide either scale or wcs, then drawImage() will default to using the Nyquist scale for the current object.

You can also set the data type used in the new Image with the dtype parameter that has the same options as for the Image constructor.

The drawing “method”:

There are several different possible methods drawImage() can use for rendering the image. This is set by the method parameter. The options are:

auto

This is the default, which will normally be equivalent to ‘fft’. However, if the object being rendered is simple (no convolution) and has hard edges (e.g. a Box or a truncated Moffat or Sersic), then it will switch to ‘real_space’, since that is often both faster and more accurate in these cases (due to ringing in Fourier space).

fft

The integration of the light within each pixel is mathematically equivalent to convolving by the pixel profile (a Pixel object) and sampling the result at the centers of the pixels. This method will do that convolution using a discrete Fourier transform. Furthermore, if the object (or any component of it) has been transformed via shear(), dilate(), etc., then these transformations are done in Fourier space as well.

real_space

This uses direct integrals (using the Gauss-Kronrod-Patterson algorithm) in real space for the integration over the pixel response. It is usually slower than the ‘fft’ method, but if the profile has hard edges that cause ringing in Fourier space, it can be faster and/or more accurate. If you use ‘real_space’ with something that is already a Convolution, then this will revert to ‘fft’, since the double convolution that is required to also handle the pixel response is far too slow to be practical using real-space integrals.

phot

This uses a technique called photon shooting to render the image. Essentially, the object profile is taken as a probability distribution from which a finite number of photons are “shot” onto the image. Each photon’s flux gets added to whichever pixel the photon hits. This process automatically accounts for the integration of the light over the pixel area, since all photons that hit any part of the pixel are counted. Convolutions and transformations are simple geometric processes in this framework. However, there are two caveats with this method: (1) the resulting image will have Poisson noise from the finite number of photons, and (2) it is not available for all object types (notably anything that includes a Deconvolution).

no_pixel

Instead of integrating over the pixels, this method will sample the profile at the centers of the pixels and multiply by the pixel area. If there is a convolution involved, the choice of whether this will use an FFT or real-space calculation is governed by the real_space parameter of the Convolution class. This method is the appropriate choice if you are using a PSF that already includes a convolution by the pixel response. For example, if you are using a PSF from an observed image of a star, then it has already been convolved by the pixel, so you would not want to do so again. Note: The multiplication by the pixel area gets the flux normalization right for the above use case. cf. method = 'sb'.

sb

This is a lot like ‘no_pixel’, except that the image values will simply be the sampled object profile’s surface brightness, not multiplied by the pixel area. This does not correspond to any real observing scenario, but it could be useful if you want to view the surface brightness profile of an object directly, without including the pixel integration.

The ‘phot’ method has a few extra parameters that adjust how it functions. The total number of photons to shoot is normally calculated from the object’s flux. This flux is taken to be given in photons/cm^2/s, so for most simple profiles, this times area * exptime will equal the number of photons shot. (See the discussion in Rowe et al, 2015, for why this might be modified for InterpolatedImage and related profiles.) However, you can manually set a different number of photons with n_photons. You can also set max_extra_noise to tell drawImage() to use fewer photons than normal (and so is faster) such that no more than that much extra noise is added to any pixel. This is particularly useful if you will be subsequently adding sky noise, and you can thus tolerate more noise than the normal number of photons would give you, since using fewer photons is of course faster. Finally, the default behavior is to have the total flux vary as a Poisson random variate, which is normally appropriate with photon shooting. But you can turn this off with poisson_flux=False. It also defaults to False if you set an explicit value for n_photons.

Given the periodicity implicit in the use of FFTs, there can occasionally be artifacts due to wrapping at the edges, particularly for objects that are quite extended (e.g., due to the nature of the radial profile). See GSParams for parameters that you can use to reduce the level of these artifacts, in particular folding_threshold may be helpful if you see such artifacts in your images.

Setting the offset:

The object will by default be drawn with its nominal center at the center location of the image. There is thus a qualitative difference in the appearance of the rendered profile when drawn on even- and odd-sized images. For a profile with a maximum at (0,0), this maximum will fall in the central pixel of an odd-sized image, but in the corner of the four central pixels of an even-sized image. There are three parameters that can affect this behavior. First, you can specify an arbitrary pixel position at which to center the object using the center parameter. If this is None, then it will pick one of the two potential “centers” of the image, either image.true_center or image.center. The latter is an integer position, which always corresponds to the center of some pixel, and for even-sized images this won’t (cannot) be the actual “true” center of the image. You can choose which of these two centers you want to use with the use_true_center parameter, which defaults to True. Finally, you can offset the profile from the chosen center by an arbitrary amount using the offset parameter. (Typically, one would use only one of center or offset, but it is permissible to use both.)
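
For example (a minimal sketch, assuming obj is a GSObject and full_image is an existing Image large enough to contain the requested center; all values are illustrative):

>>> im = obj.drawImage(nx=64, ny=64, scale=0.2, offset=(0.3, -0.1))
>>> im2 = obj.drawImage(image=full_image, center=galsim.PositionD(100.5, 200.5))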

Setting the overall normalization:

Normally, the flux of the object should be equal to the sum of all the pixel values in the image, less some small amount of flux that may fall off the edge of the image (assuming you don’t use method='sb'). However, you may optionally set a gain value, which converts between photons and ADU (so-called analog-to-digital units), the units of the pixel values in real images. Normally, the gain of a CCD is in electrons/ADU, but in GalSim, we fold the quantum efficiency into the gain as well, so the units are photons/ADU.

Another caveat is that, technically, flux is really in units of photons/cm^2/s, not photons. So if you want, you can keep track of this properly and provide an area and exptime here. This detail is more important with chromatic objects where the SED is typically given in erg/cm^2/s/nm, so the exposure time and area are important details. With achromatic objects however, it is often more convenient to ignore these details and just consider the flux to be the total number of photons for this exposure, in which case, you would leave the area and exptime parameters at their default value of 1.

On return, the image will have an attribute added_flux, which will be set to the total flux added to the image. This may be useful as a sanity check that you have provided a large enough image to catch most of the flux. For example:

>>> obj.drawImage(image)
>>> assert image.added_flux > 0.99 * obj.flux

The appropriate threshold will depend on your particular application, including what kind of profile the object has, how big your image is relative to the size of your object, whether you are keeping poisson_flux=True, etc.

The following code snippet illustrates how gain, exptime, area, and method can all influence the relationship between the flux attribute of a GSObject and both the pixel values and .added_flux attribute of an Image drawn with drawImage():

>>> obj = galsim.Gaussian(fwhm=1)
>>> obj.flux
1.0
>>> im = obj.drawImage()
>>> im.added_flux
0.9999630988657515
>>> im.array.sum()
0.99996305
>>> im = obj.drawImage(exptime=10, area=10)
>>> im.added_flux
0.9999630988657525
>>> im.array.sum()
99.996315
>>> im = obj.drawImage(exptime=10, area=10, method='sb', scale=0.5, nx=10, ny=10)
>>> im.added_flux
0.9999973790505298
>>> im.array.sum()
399.9989
>>> im = obj.drawImage(exptime=10, area=10, gain=2)
>>> im.added_flux
0.9999630988657525
>>> im.array.sum()
49.998158

Using a non-trivial sensor:

Normally the sensor is modeled as an array of pixels where any photon that hits a given pixel is accumulated into that pixel. The final pixel value then just reflects the total number of photons that hit each pixel. However, real sensors do not (quite) work this way.

In real CCDs, the photons travel some distance into the silicon before converting to electrons. Then the electrons diffuse laterally some amount as they are pulled by the electric field toward the substrate. Finally, previous electrons that have already been deposited will repel subsequent electrons, both slowing down their descent, leading to more diffusion, and pushing them laterally toward neighboring pixels, which is called the brighter-fatter effect.

Users interested in modeling this kind of effect can supply a sensor object to use for the accumulation step. See SiliconSensor for a class that models silicon-based CCD sensors.

Some related effects may need to be done to the photons at the surface layer before being passed into the sensor object. For instance, the photons may need to be given appropriate incidence angles according to the optics of the telescope (since this matters for where the photons are converted to electrons). You may also need to give the photons wavelengths according to the SED of the object. Such steps are specified in a photon_ops parameter, which should be a list of any such operations you wish to perform on the photon array before passing them to the sensor. See FRatioAngles and WavelengthSampler for two examples of such photon operators.

Since the sensor deals with photons, it is most natural to use this feature in conjunction with photon shooting (method='phot'). However, it is allowed with FFT methods too. But there is a caveat one should be aware of in this case. The FFT drawing is used to produce an intermediate image, which is then converted to a PhotonArray using the factory function PhotonArray.makeFromImage. This assigns photon positions randomly within each pixel where they were drawn, which isn’t always a particularly good approximation.

To improve this behavior, the intermediate image is drawn with smaller pixels than the target image, so the photons are given positions closer to their true locations. The amount of subsampling is controlled by the n_subsample parameter, which defaults to 3. Larger values will be more accurate at the expense of larger FFTs (i.e. slower and using more memory).

Parameters:
  • image – If provided, this will be the image on which to draw the profile. If image is None, then an automatically-sized Image will be created. If image is given, but its bounds are undefined (e.g. if it was constructed with image = galsim.Image()), then it will be resized appropriately based on the profile’s size [default: None].

  • nx – If provided and image is None, use to set the x-direction size of the image. Must be accompanied by ny.

  • ny – If provided and image is None, use to set the y-direction size of the image. Must be accompanied by nx.

  • bounds – If provided and image is None, use to set the bounds of the image.

  • scale – If provided, use this as the pixel scale for the image. If scale is None and image is given, then take the provided image’s pixel scale. If scale is None and image is None, then use the Nyquist scale. If scale <= 0 (regardless of image), then use the Nyquist scale. If scale > 0 and image is given, then override image.scale with the value given as a keyword. [default: None]

  • wcs – If provided, use this as the wcs for the image (possibly overriding any existing image.wcs). At most one of scale or wcs may be provided. [default: None]

  • dtype – The data type to use for an automatically constructed image. Only valid if image is None. [default: None, which means to use numpy.float32]

  • method – Which method to use for rendering the image. See discussion above for the various options and what they do. [default: ‘auto’]

  • area – Collecting area of telescope in cm^2. [default: 1.]

  • exptime – Exposure time in s. [default: 1.]

  • gain – The number of photons per ADU (“analog to digital units”, the units of the numbers output from a CCD). [default: 1]

  • add_to_image – Whether to add flux to the existing image rather than clear out anything in the image before drawing. Note: This requires that image be provided and that it have defined bounds. [default: False]

  • center – The position on the image at which to place the nominal center of the object (usually the centroid, but not necessarily). [default: None]

  • use_true_center – If center is None, then the object is normally centered at the true center of the image (using the property image.true_center). If you would rather use the integer center (given by image.center), set this to False. [default: True]

  • offset – The offset in pixel coordinates at which to center the profile being drawn relative to either center (if given) or the center of the image (either the true center or integer center according to the use_true_center parameter). [default: None]

  • n_photons – If provided, the number of photons to use for photon shooting. If not provided (i.e. n_photons = 0), use as many photons as necessary to result in an image with the correct Poisson shot noise for the object’s flux. For positive definite profiles, this is equivalent to n_photons = flux. However, some profiles need more than this because some of the shot photons are negative (usually due to interpolants). [default: 0]

  • rng – If provided, a random number generator to use for photon shooting, which may be any kind of BaseDeviate object. If rng is None, one will be automatically created. [default: None]

  • max_extra_noise – If provided, the allowed extra noise in each pixel when photon shooting. This is only relevant if n_photons=0, so the number of photons is being automatically calculated. In that case, if the image noise is dominated by the sky background, then you can get away with using fewer shot photons than the full n_photons = flux. Essentially each shot photon can have a flux > 1, which increases the noise in each pixel. The max_extra_noise parameter specifies how much extra noise per pixel is allowed because of this approximation. A typical value for this might be max_extra_noise = sky_level / 100 where sky_level is the flux per pixel due to the sky. Note that this uses a “variance” definition of noise, not a “sigma” definition. [default: 0.]

  • poisson_flux – Whether to allow total object flux scaling to vary according to Poisson statistics for n_photons samples when photon shooting. [default: True, unless n_photons is given, in which case the default is False]

  • sensor – An optional Sensor instance, which will be used to accumulate the photons onto the image. [default: None]

  • photon_ops – A list of operators that can modify the photon array that will be applied in order before accumulating the photons on the sensor. [default: None]

  • n_subsample – The number of sub-pixels per final pixel to use for fft drawing when using a sensor. The sensor step needs to know the sub-pixel positions of the photons, which is lost in the fft method. So using smaller pixels for the fft step keeps some of that information, making the assumption of uniform flux per pixel less bad of an approximation. [default: 3]

  • maxN

    Sets the maximum number of photons that will be added to the image at a time. (Memory requirements are proportional to this number.)

    Note

    Using this parameter will not necessarily produce identical results as when not using it due to potentially different order of various random number generations in either the photon_ops, or the sensor, or (for method=’fft’) the conversion of the FFT image to photons.

    [default: None, which means no limit]

  • save_photons – If True, save the PhotonArray as image.photons. Only valid if method is ‘phot’ or sensor is not None. [default: False]

  • bandpass – This parameter is ignored, but is allowed to enable duck typing equivalence between this method and the ChromaticObject.drawImage method. [default: None]

  • setup_only – Don’t actually draw anything on the image. Just make sure the image is set up correctly. This is used internally by GalSim, but there may be cases where the user will want the same functionality. [default: False]

Returns:

the drawn Image.

drawKImage(image=None, nx=None, ny=None, bounds=None, scale=None, add_to_image=False, recenter=True, bandpass=None, setup_only=False)[source]

Draws the k-space (complex) Image of the object, with bounds optionally set by input Image instance.

Normalization is always such that image(0,0) = flux. Unlike the real-space drawImage function, the (0,0) point will always be one of the actual pixel values. For even-sized images, it will be 1/2 pixel above and to the right of the true center of the image.

Another difference from drawImage is that a wcs other than a simple pixel scale is not allowed. There is no wcs parameter here, and if the images have a non-trivial wcs (and you don’t override it with the scale parameter), a TypeError will be raised.

Also, there is no convolution by a pixel. This is just a direct image of the Fourier transform of the surface brightness profile.

Parameters:
  • image – If provided, this will be the Image onto which to draw the k-space image. If image is None, then an automatically-sized image will be created. If image is given, but its bounds are undefined, then it will be resized appropriately based on the profile’s size. [default: None]

  • nx – If provided and image is None, use to set the x-direction size of the image. Must be accompanied by ny.

  • ny – If provided and image is None, use to set the y-direction size of the image. Must be accompanied by nx.

  • bounds – If provided and image is None, use to set the bounds of the image.

  • scale – If provided, use this as the pixel scale, dk, for the images. If scale is None and image is given, then take the provided images’ pixel scale (which must be equal). If scale is None and image is None, then use the Nyquist scale. If scale <= 0 (regardless of image), then use the Nyquist scale. [default: None]

  • add_to_image – Whether to add to the existing images rather than clear out anything in the image before drawing. Note: This requires that image be provided and that it has defined bounds. [default: False]

  • recenter – Whether to recenter the image to put k = 0 at the center (True) or to trust the provided bounds (False). [default: True]

  • bandpass – This parameter is ignored, but is allowed to enable duck typing equivalence between this method and the ChromaticObject.drawImage method. [default: None]

  • setup_only – Don’t actually draw anything on the image. Just make sure the image is set up correctly. This is used internally by GalSim, but there may be cases where the user will want the same functionality. [default: False]

Returns:

an Image instance (created if necessary)
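
For example (a minimal sketch; the size and k-space pixel scale are illustrative):

>>> psf = galsim.Moffat(beta=3, fwhm=2.85)
>>> kimage = psf.drawKImage(nx=64, ny=64, scale=0.1)   # scale here is dk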

drawPhot(image, gain=1.0, add_to_image=False, n_photons=0, rng=None, max_extra_noise=0.0, poisson_flux=None, sensor=None, photon_ops=(), maxN=None, orig_center=galsim.PositionI(x=0, y=0), local_wcs=None, surface_ops=None)[source]

Draw this profile into an Image by shooting photons.

This is usually called from the drawImage function, rather than called directly by the user. In particular, the input image must be already set up with defined bounds. The profile will be drawn centered on whatever pixel corresponds to (0,0) with the given bounds, not the image center (unlike drawImage). The image also must have a PixelScale wcs. The profile being drawn should have already been converted to image coordinates via:

>>> image_profile = original_wcs.toImage(original_profile)

Note that the image produced by drawPhot represents the profile integrated over the area of each pixel. This is equivalent to convolving the profile by a square Pixel profile and sampling the value at the center of each pixel, although this happens automatically by the shooting algorithm, so you do not need to manually convolve by a Pixel as you would for drawReal or drawFFT.

Parameters:
  • image – The Image onto which to place the flux. [required]

  • gain – The number of photons per ADU (“analog to digital units”, the units of the numbers output from a CCD). [default: 1.]

  • add_to_image – Whether to add to the existing images rather than clear out anything in the image before drawing. [default: False]

  • n_photons – If provided, the number of photons to use for photon shooting. If not provided (i.e. n_photons = 0), use as many photons as necessary to result in an image with the correct Poisson shot noise for the object’s flux. For positive definite profiles, this is equivalent to n_photons = flux. However, some profiles need more than this because some of the shot photons are negative (usually due to interpolants). [default: 0]

  • rng – If provided, a random number generator to use for photon shooting, which may be any kind of BaseDeviate object. If rng is None, one will be automatically created, using the time as a seed. [default: None]

  • max_extra_noise – If provided, the allowed extra noise in each pixel when photon shooting. This is only relevant if n_photons=0, so the number of photons is being automatically calculated. In that case, if the image noise is dominated by the sky background, then you can get away with using fewer shot photons than the full n_photons = flux. Essentially each shot photon can have a flux > 1, which increases the noise in each pixel. The max_extra_noise parameter specifies how much extra noise per pixel is allowed because of this approximation. A typical value for this might be max_extra_noise = sky_level / 100 where sky_level is the flux per pixel due to the sky. Note that this uses a “variance” definition of noise, not a “sigma” definition. [default: 0.]

  • poisson_flux – Whether to allow total object flux scaling to vary according to Poisson statistics for n_photons samples when photon shooting. [default: True, unless n_photons is given, in which case the default is False]

  • sensor – An optional Sensor instance, which will be used to accumulate the photons onto the image. [default: None]

  • photon_ops – A list of operators that can modify the photon array that will be applied in order before accumulating the photons on the sensor. [default: ()]

  • maxN – Sets the maximum number of photons that will be added to the image at a time. (Memory requirements are proportional to this number.) [default: None, which means no limit]

  • orig_center – The position of the image center in the original image coordinates. [default: (0,0)]

  • local_wcs – The local wcs in the original image. [default: None]

Returns:

  • added_flux is the total flux of photons that landed inside the image bounds, and

  • photons is the PhotonArray that was applied to the image.

Return type:

(added_flux, photons) where

drawReal(image, add_to_image=False)[source]

Draw this profile into an Image by direct evaluation at the location of each pixel.

This is usually called from the drawImage function, rather than called directly by the user. In particular, the input image must be already set up with defined bounds. The profile will be drawn centered on whatever pixel corresponds to (0,0) with the given bounds, not the image center (unlike drawImage). The image also must have a PixelScale wcs. The profile being drawn should have already been converted to image coordinates via:

>>> image_profile = original_wcs.toImage(original_profile)

Note that the image produced by drawReal represents the profile sampled at the center of each pixel and then multiplied by the pixel area. That is, the profile is NOT integrated over the area of the pixel. This is equivalent to method=’no_pixel’ in drawImage. If you want to render a profile integrated over the pixel, you can convolve with a Pixel first and draw that.

Parameters:
  • image – The Image onto which to place the flux. [required]

  • add_to_image – Whether to add flux to the existing image rather than clear out anything in the image before drawing. [default: False]

Returns:

The total flux drawn inside the image bounds.

expand(scale)[source]

Expand the linear size of the profile by the given scale factor, while preserving surface brightness.

e.g. half_light_radius <-- half_light_radius * scale

This doesn’t correspond to either of the normal operations one would typically want to do to a galaxy. The functions dilate() and magnify() are the more typical usage. But this function is conceptually simple. It rescales the linear dimension of the profile, while preserving surface brightness. As a result, the flux will necessarily change as well.

See dilate() for a version that applies a linear scale factor while preserving flux.

See magnify() for a version that applies a scale factor to the area while preserving surface brightness.

Parameters:

scale – The factor by which to scale the linear dimension of the object.

Returns:

the expanded object.
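
For example, the following sketch shows how the three scaling methods treat the flux (the numbers follow directly from the definitions above):

>>> gal = galsim.Gaussian(sigma=1., flux=10.)
>>> gal.dilate(2.).flux     # 10.0 -- flux preserved
>>> gal.expand(2.).flux     # 40.0 -- surface brightness preserved, so area x4 means flux x4
>>> gal.magnify(4.).flux    # 40.0 -- area and flux both scaled by mu=4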

property flux

The flux of the profile.

getGoodImageSize(pixel_scale)[source]

Return a good size to use for drawing this profile.

The size will be large enough to cover most of the flux of the object. Specifically, at least (1-gsparams.folding_threshold) (i.e. 99.5% by default) of the flux should fall in the image.

Also, the returned size is always an even number, which is usually desired in practice. Of course, if you prefer an odd-sized image, you can add 1 to the result.

Parameters:

pixel_scale – The desired pixel scale of the image to be built.

Returns:

N, a good (linear) size of an image on which to draw this object.
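
For example (illustrative profile and pixel scale):

>>> psf = galsim.Moffat(beta=3, fwhm=2.85)
>>> N = psf.getGoodImageSize(0.2)     # a good (even) image size at 0.2 arcsec/pixel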

property gsparams

A GSParams object that sets various parameters relevant for speed/accuracy trade-offs.

property has_hard_edges

Whether there are any hard edges in the profile, which would require very small k spacing when working in the Fourier domain.

property is_analytic_k

Whether the k-space values can be determined immediately at any position without requiring a Discrete Fourier Transform.

property is_analytic_x

Whether the real-space values can be determined immediately at any position without requiring a Discrete Fourier Transform.

property is_axisymmetric

Whether the profile is axially symmetric; affects efficiency of evaluation.

kValue(*args, **kwargs)[source]

Returns the value of the object at a chosen 2D position in k space.

This function returns the amplitude of the fourier transform of the surface brightness profile at a given position in k space. The position argument may be provided as a Position argument, or it may be given as kx,ky (either as a tuple or as two arguments).

Technically, kValue() is available if and only if the given obj has obj.is_analytic_k == True, but this is the case for all GSObject classes currently, so that should never be an issue (unlike for xValue).

Parameters:

position – The position in k space at which you want the fourier amplitude.

Returns:

the amplitude of the fourier transform at that position.
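
For example (illustrative profile; at k = 0 the value equals the total flux):

>>> gauss = galsim.Gaussian(sigma=1., flux=10.)
>>> gauss.kValue(0., 0.)                       # (10+0j): equals the flux at k=0
>>> gauss.kValue(galsim.PositionD(1., 0.))     # the position may also be given as a PositionD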

lens(g1, g2, mu)[source]

Create a version of the current object with both a lensing shear and magnification applied to it.

This GSObject method applies a lensing (reduced) shear and magnification. The shear must be specified using the g1, g2 definition of shear (see Shear for more details). This is the same definition as the outputs of the PowerSpectrum and NFWHalo classes, which compute shears according to some lensing power spectrum or lensing by an NFW dark matter halo. The magnification determines the rescaling factor for the object area and flux, preserving surface brightness.

Parameters:
  • g1 – First component of lensing (reduced) shear to apply to the object.

  • g2 – Second component of lensing (reduced) shear to apply to the object.

  • mu – Lensing magnification to apply to the object. This is the factor by which the solid angle subtended by the object is magnified, preserving surface brightness.

Returns:

the lensed object.
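
For example (illustrative shear and magnification values):

>>> gal = galsim.Exponential(half_light_radius=1.3)
>>> lensed = gal.lens(g1=0.01, g2=0.03, mu=1.05)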

magnify(mu)[source]

Create a version of the current object with a lensing magnification applied to it, scaling the area and flux by mu at fixed surface brightness.

This process applies a lensing magnification mu, which scales the linear dimensions of the image by the factor sqrt(mu), i.e., half_light_radius <-- half_light_radius * sqrt(mu) while increasing the flux by a factor of mu. Thus, magnify() preserves surface brightness.

See dilate() for a version that applies a linear scale factor while preserving flux.

See expand() for a version that applies a linear scale factor while preserving surface brightness.

Parameters:

mu – The lensing magnification to apply.

Returns:

the magnified object.

makePhot(n_photons=0, rng=None, max_extra_noise=0.0, poisson_flux=None, photon_ops=(), local_wcs=None, surface_ops=None)[source]

Make photons for a profile.

This is equivalent to drawPhot, except that the photons are not placed onto an image. Instead, it just returns the PhotonArray.

Note

The (x,y) positions returned are in the same units as the distance units of the GSObject being rendered. If you want (x,y) in pixel coordinates, you should call this function for the profile in image coordinates:

>>> photons = image.wcs.toImage(obj).makePhot()

Or if you just want a simple pixel scale conversion from sky coordinates to image coordinates, you can instead do

>>> photons = obj.makePhot()
>>> photons.scaleXY(1./pixel_scale)

Parameters:
  • n_photons – If provided, the number of photons to use for photon shooting. If not provided (i.e. n_photons = 0), use as many photons as necessary to result in an image with the correct Poisson shot noise for the object’s flux. For positive definite profiles, this is equivalent to n_photons = flux. However, some profiles need more than this because some of the shot photons are negative (usually due to interpolants). [default: 0]

  • rng – If provided, a random number generator to use for photon shooting, which may be any kind of BaseDeviate object. If rng is None, one will be automatically created, using the time as a seed. [default: None]

  • max_extra_noise – If provided, the allowed extra noise in each pixel when photon shooting. This is only relevant if n_photons=0, so the number of photons is being automatically calculated. In that case, if the image noise is dominated by the sky background, then you can get away with using fewer shot photons than the full n_photons = flux. Essentially each shot photon can have a flux > 1, which increases the noise in each pixel. The max_extra_noise parameter specifies how much extra noise per pixel is allowed because of this approximation. A typical value for this might be max_extra_noise = sky_level / 100 where sky_level is the flux per pixel due to the sky. Note that this uses a “variance” definition of noise, not a “sigma” definition. [default: 0.]

  • poisson_flux – Whether to allow total object flux scaling to vary according to Poisson statistics for n_photons samples when photon shooting. [default: True, unless n_photons is given, in which case the default is False]

  • photon_ops – A list of operators that can modify the photon array that will be applied in order before accumulating the photons on the sensor. [default: ()]

  • local_wcs – The local wcs in the original image. [default: None]

Returns:

the PhotonArray.

property max_sb

An estimate of the maximum surface brightness of the object.

Some profiles will return the exact peak SB, typically equal to the value of obj.xValue(obj.centroid). However, not all profiles (e.g. Convolution) know how to calculate this value without just drawing the image and checking what the maximum value is. Clearly, this would be inefficient, so in these cases, some kind of estimate is returned, which will generally be conservative on the high side.

This routine is mainly used by the photon shooting process, where an overestimate of the maximum surface brightness is acceptable.

Note, for negative-flux profiles, this will return the absolute value of the most negative surface brightness. Technically, it is an estimate of the maximum deviation from zero, rather than the maximum value. For most profiles, these are the same thing.

property maxk

The value of k beyond which aliasing can be neglected.

property negative_flux

The expectation value of flux in negative photons.

Some profiles, when rendered with photon shooting, need to shoot both positive- and negative-flux photons. For such profiles, this method returns the total absolute flux of the negative-valued photons (i.e. as a positive value).

For profiles that don’t have this complication, this returns 0.

It should generally be true that obj.positive_flux - obj.negative_flux returns the same thing as obj.flux. Small differences may accrue from finite numerical accuracy in cases involving lookup tables, etc.

property noise

An estimate of the noise already in the profile.

Some profiles have some noise already in their definition. E.g. those that come from observations of galaxies in real data. In GalSim, RealGalaxy objects are an example of this. In these cases, the noise attribute gives an estimate of the Noise object that would generate noise consistent with that already in the profile.

It is permissible to attach a noise estimate to an existing object with:

>>> obj.noise = noise    # Some BaseNoise instance

property nyquist_scale

The pixel spacing that does not alias maxk.

property positive_flux

The expectation value of flux in positive photons.

Some profiles, when rendered with photon shooting, need to shoot both positive- and negative-flux photons. For such profiles, this method returns the total flux of the positive-valued photons.

For profiles that don’t have this complication, this is equivalent to obj.flux.

It should generally be true that obj.positive_flux - obj.negative_flux returns the same thing as obj.flux. Small differences may accrue from finite numerical accuracy in cases involving lookup tables, etc.
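
As an illustration, a positive-definite profile shoots no negative photons, while an InterpolatedImage typically does because the default interpolant has negative lobes (the parameters here are purely illustrative):

>>> gauss = galsim.Gaussian(sigma=1., flux=100.)
>>> gauss.negative_flux                     # 0 for a positive-definite profile
>>> img = gauss.drawImage(nx=32, ny=32, scale=0.2)
>>> ii = galsim.InterpolatedImage(img)
>>> ii.negative_flux                        # > 0 due to the interpolant
>>> ii.positive_flux - ii.negative_flux     # close to ii.flux, up to numerical accuracy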

rotate(theta)[source]

Rotate this object by an Angle theta.

Parameters:

theta – Rotation angle (Angle object, positive means anticlockwise).

Returns:

the rotated object.
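
For example, to rotate a profile by 30 degrees anticlockwise (the angle is purely illustrative):

>>> obj = obj.rotate(30 * galsim.degrees)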

shear(*args, **kwargs)[source]

Create a version of the current object with an area-preserving shear applied to it.

The arguments may be either a Shear instance or arguments to be used to initialize one.

For more details about the allowed keyword arguments, see the Shear docstring.

The shear() method precisely preserves the area. To include a lensing distortion with the appropriate change in area, either use shear() with magnify(), or use lens(), which combines both operations.

Parameters:

shear – The Shear to be applied. Or, as described above, you may instead supply parameters to construct a shear directly, e.g. obj.shear(g1=g1, g2=g2).

Returns:

the sheared object.
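
For example (the shear and magnification values here are purely illustrative):

>>> sheared = obj.shear(g1=0.05, g2=0.02)               # keyword arguments build the Shear
>>> s = galsim.Shear(g1=0.05, g2=0.02)
>>> sheared = obj.shear(s)                              # equivalent, using a Shear instance
>>> lensed = obj.shear(g1=0.05, g2=0.02).magnify(1.1)   # shear plus magnification
>>> lensed = obj.lens(0.05, 0.02, 1.1)                  # same effect in one step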

shift(*args, **kwargs)[source]

Create a version of the current object shifted by some amount in real space.

After this call, the returned object’s type will be GSObject rather than the original derived type. This means that if the original object had extra methods or properties beyond those defined in GSObject (e.g. Gaussian.sigma), those are no longer available on the returned object.

Note: in addition to the dx,dy parameter names, you may also supply dx,dy as a tuple, or as a Position object.

The shift coordinates here are sky coordinates. GSObject profiles are always defined in sky coordinates and only later (when they are drawn) is the connection to pixel coordinates established (via a pixel_scale or WCS). So a shift of dx moves the object horizontally in the sky (e.g. west in the local tangent plane of the observation), and dy moves the object vertically (north in the local tangent plane).

The units are typically arcsec, but we don’t enforce that anywhere. The units here just need to be consistent with the units used for any size values used by the GSObject. The connection of these units to the eventual image pixels is defined by either the pixel_scale or the wcs parameter of GSObject.drawImage.

Note: if you want to shift the object by a set number (or fraction) of pixels in the drawn image, you probably want to use the offset parameter of GSObject.drawImage rather than this method.

Parameters:
  • dx – Horizontal shift to apply.

  • dy – Vertical shift to apply.

Alternatively, you may supply a single parameter as a Position instance, rather than the two components separately if that is more convenient.

Parameter:

offset – The shift to apply, given as PositionD(dx,dy) or PositionI(dx,dy).

Returns:

the shifted object.
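
For example (the shift values and pixel scale here are purely illustrative):

>>> obj = obj.shift(0.3, -0.1)                       # shift in sky coordinates (e.g. arcsec)
>>> obj = obj.shift(galsim.PositionD(0.3, -0.1))     # equivalent, using a Position
>>> # To move the object by exactly 2 pixels in the drawn image, prefer the offset parameter:
>>> image = obj.drawImage(scale=0.2, offset=(2, 0))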

shoot(n_photons, rng=None)[source]

Shoot photons into a PhotonArray.

Parameters:
  • n_photons – The number of photons to use for photon shooting.

  • rng – If provided, a random number generator to use for photon shooting, which may be any kind of BaseDeviate object. If rng is None, one will be automatically created, using the time as a seed. [default: None]

Returns:

A PhotonArray.
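
For example (the profile and seed here are purely illustrative):

>>> rng = galsim.BaseDeviate(1234)
>>> obj = galsim.Exponential(half_light_radius=1.3, flux=1.e4)
>>> photons = obj.shoot(10000, rng=rng)
>>> photons.x, photons.y, photons.flux     # numpy arrays of photon positions and fluxes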

property stepk

The sampling in k space necessary to avoid folding of the image in x space.

transform(dudx, dudy, dvdx, dvdy)[source]

Create a version of the current object with an arbitrary Jacobian matrix transformation applied to it.

This applies a Jacobian matrix to the coordinate system in which this object is defined. It changes a profile defined in terms of (x,y) to one defined in terms of (u,v) where:

u = dudx x + dudy y
v = dvdx x + dvdy y

That is, an arbitrary affine transform, but without the translation (which is easily effected via the shift method).

Note that this function is similar to expand in that it preserves surface brightness, not flux. If you want to preserve flux, you should also do:

>>> prof *= 1./abs(dudx*dvdy - dudy*dvdx)

Parameters:
  • dudx – du/dx, where (x,y) are the current coords, and (u,v) are the new coords.

  • dudy – du/dy, where (x,y) are the current coords, and (u,v) are the new coords.

  • dvdx – dv/dx, where (x,y) are the current coords, and (u,v) are the new coords.

  • dvdy – dv/dy, where (x,y) are the current coords, and (u,v) are the new coords.

Returns:

the transformed object
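
For example, a reflection that swaps the x and y axes (unit |determinant|, so the flux is unchanged), and a stretch followed by the flux-preserving rescaling mentioned above (the values are purely illustrative):

>>> flipped = obj.transform(0., 1., 1., 0.)
>>> dudx, dudy, dvdx, dvdy = 1.2, 0., 0., 0.8
>>> prof = obj.transform(dudx, dudy, dvdx, dvdy)
>>> prof *= 1. / abs(dudx*dvdy - dudy*dvdx)    # restore the original total flux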

withFlux(flux)[source]

Create a version of the current object with a different flux.

This function is equivalent to obj.withScaledFlux(flux / obj.flux).

It creates a new object that has the same profile as the original, but with the surface brightness at every location rescaled such that the total flux will be the given value. Note that if flux is an SED, the return value will be a ChromaticObject with the specified SED.

Parameters:

flux – The new flux for the object.

Returns:

the object with the new flux
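
For example (the profile and flux values here are purely illustrative):

>>> gal = galsim.Exponential(half_light_radius=1.3, flux=5.)
>>> gal = gal.withFlux(100.)
>>> gal.flux     # 100.0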

withGSParams(gsparams=None, **kwargs)[source]

Create a version of the current object with the given GSParams.

You may either provide a GSParams instance:

>>> gsparams = galsim.GSParams(folding_threshold=1.e-4, maxk_threshold=1.e-4)
>>> obj = obj.withGSParams(gsparams)

or one or more named parameters as keyword arguments:

>>> obj = obj.withGSParams(folding_threshold=1.e-4, maxk_threshold=1.e-4)

Note

The latter style will leave all non-named parameters at their current values. It only updates the named parameters to the given values.

withScaledFlux(flux_ratio)[source]

Create a version of the current object with the flux scaled by the given flux_ratio.

This function is equivalent to obj.withFlux(flux_ratio * obj.flux). Indeed, withFlux() is implemented in terms of this one.

It creates a new object that has the same profile as the original, but with the surface brightness at every location scaled by the given amount. If flux_ratio is an SED, then the returned object is a ChromaticObject with the SED multiplied by its current flux.

Note that in this case the flux attribute of the GSObject being scaled gets interpreted as being dimensionless, instead of having its normal units of [photons/s/cm^2]. The photons/s/cm^2 units are (optionally) carried by the SED instead, or even left out entirely if the SED is dimensionless itself (see discussion in the ChromaticObject docstring). The GSObject flux attribute does still contribute to the ChromaticObject normalization, though. For example, the following are equivalent:

>>> chrom_obj = gsobj.withScaledFlux(sed * 3.0)
>>> chrom_obj2 = (gsobj * 3.0).withScaledFlux(sed)

An equivalent, and usually simpler, way to effect this scaling is:

>>> obj = obj * flux_ratio

Parameters:

flux_ratio – The ratio by which to rescale the flux of the object when creating a new one.

Returns:

the object with the new flux.

xValue(*args, **kwargs)[source]

Returns the value of the object at a chosen 2D position in real space.

This function returns the surface brightness of the object at a particular position in real space. The position argument may be provided as a PositionD or PositionI argument, or it may be given as x,y (either as a tuple or as two arguments).

The object surface brightness profiles are typically defined in world coordinates, so the position here should be in world coordinates as well.

Not all GSObject classes can use this method. Classes like Convolution that require a Discrete Fourier Transform to determine the real-space values cannot do so for a single position; instead, a GalSimError will be raised. The xValue() method is available if and only if obj.is_analytic_x == True.

Users who wish to use the xValue() method for an object that is the convolution of other profiles can do so by drawing the convolved profile into an image, using the image to initialize a new InterpolatedImage, and then using the xValue() method for that new object.
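
A sketch of that workaround (the profiles, image size, and evaluation position here are purely illustrative):

>>> gal = galsim.Exponential(half_light_radius=1.3)
>>> psf = galsim.Moffat(beta=3, fwhm=0.9)
>>> conv = galsim.Convolve([gal, psf])          # conv.is_analytic_x is False
>>> img = conv.drawImage(nx=64, ny=64, scale=0.2, method='no_pixel')
>>> approx = galsim.InterpolatedImage(img)
>>> approx.xValue(0.1, 0.3)                     # now allowed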

Parameters:

position – The position at which you want the surface brightness of the object.

Returns:

the surface brightness at that position.