METHOD FOR SCANNING ALONG A 3-DIMENSIONAL LINE AND METHOD FOR SCANNING A REGION OF INTEREST BY SCANNING A PLURALITY OF 3-DIMENSIONAL LINES

Information

  • Patent Application
  • Publication Number
    20190285865
  • Date Filed
    March 01, 2019
  • Date Published
    September 19, 2019
Abstract
The invention relates to a method for scanning along a substantially straight line (3D line) lying at an arbitrary direction in a 3D space with a given speed using a 3D laser scanning microscope having a first pair of acousto-optic deflectors deflecting a laser beam in the x-z plane (x axis deflectors) and a second pair of acousto-optic deflectors deflecting the laser beam in the y-z plane (y axis deflectors) for focusing the laser beam in 3D.
Description
TECHNICAL FIELD

The present invention relates to a method for scanning along a substantially straight line (3D line) lying at an arbitrary direction in a 3D space with a given speed using a 3D laser scanning microscope.


The invention further relates to a method for scanning a region of interest with a 3D laser scanning microscope having acousto-optic deflectors for focusing a laser beam within a 3D space.


BACKGROUND OF INVENTION

Neuronal diversity, layer specificity of information processing, area-wise specialization of neural mechanisms, internally generated patterns, and dynamic network properties all show that understanding neural computation requires the fast readout of information flow and processing, not only from a single plane or point, but at the level of large neuronal populations situated in large 3D volumes. Moreover, coding and computation within neuronal networks are formed not only by the somatic integration domains, but also by highly non-linear dendritic integration centers which, in most cases, remain hidden from somatic recordings. Therefore, it would be desirable to simultaneously read out neural activity at both the population and single-cell levels. Moreover, it has recently been shown that neuronal signaling can be completely different in awake, behaving animals. Therefore, novel methods are needed which can simultaneously record the activity patterns of neuronal, dendritic, spine, and axonal assemblies with high spatial and temporal resolution in large scanning volumes in the brain of behaving animals.


Several new optical methods have recently been developed for the fast readout of neuronal network activity in 3D. Among the available 3D scanning solutions for multiphoton microscopy, 3D AO scanning is capable of performing 3D random-access point scanning (Katona G, Szalay G, Maak P, Kaszas A, Veress M, Hillier D, Chiovini B, Vizi E S, Roska B, Rozsa B (2012); Fast two-photon in vivo imaging with three-dimensional random-access scanning in large tissue volumes. Nature methods 9:201-208) to increase the measurement speed and signal collection efficiency by several orders of magnitude in comparison to classical raster scanning. This is because the pre-selected regions of interest (ROI) can be precisely and rapidly targeted without wasting measurement time on unnecessary background volumes. More quantitatively, 3D AO scanning increases the product of the measurement speed and the square of the signal-to-noise ratio by the ratio of the total image volume to the volume covered by the pre-selected scanning points. This ratio can be very large, about 10⁶-10⁸ per ROI, compared to traditional raster scanning of the same sample volume.


Despite the evident benefits of 3D random-access AO microscopy, the method faces two major technical limitations: i) fluorescence data are lost or contaminated with large-amplitude movement artifacts during in vivo recordings; and ii) sampling rate is limited by the large optical aperture size of the AO deflectors, which must be filled by an acoustic wave to address a given scanning point. The first technical limitation occurs because the actual location of the recorded ROIs is continuously changing during in vivo measurements due to tissue movement caused by heartbeats, blood flow in nearby vessels, respiration, and physical motion. This results in fluorescence artifacts because of the spatial inhomogeneity in the baseline fluorescence signal of all kinds of fluorescent labelling. Moreover, there is also a spatial inhomogeneity in relative fluorescence changes within recorded compartments; therefore, measurement locations within a soma or dendritic compartment are not equivalent. In addition, the amplitudes of motion-induced transients can even be larger than the ones induced by one or a few action potentials detected by genetically encoded calcium indicators (GECIs). Moreover, the kinetics of Ca2+ transients and motion artifacts can also be very similar. Therefore, it is difficult to separate, post hoc, the genuine fluorescence changes associated with neural activity from the artifacts caused by brain movement. The second technical problem with 3D point-by-point scanning is the relatively long switching time, which limits either the measurement speed or the number of ROIs. This is because large AO deflector apertures are needed to achieve large scanning volumes with a high spatial resolution, and filling these large apertures with an acoustic signal takes considerable time. Therefore, the resulting long-duration AO switching time does not allow volume or surface elements to be generated from single points in an appropriate time period.


The robust performance of 3D point-by-point scanning performed with AO microscopes has been demonstrated in earlier works in slice preparations or in anesthetized animals. In these studies, 3D scanning was achieved by using two groups of x and y deflectors. During focusing, the second x (and y) deflector's driver function was supplemented with counter-propagating acoustic waves with a linearly increasing (chirped) frequency, programmed to fully compensate for the lateral drift of the focal spot; this drift would otherwise be caused by the continuously increasing mean acoustic frequency in the chirped wave. In this way, the point scanning method yields high pointing stability but requires relatively long switching times, because it is necessary to fill the large AO deflector apertures each time a new point is addressed in 3D.


An alternative continuous trajectory scanning method (Katona G, Szalay G, Maak P, Kaszas A, Veress M, Hillier D, Chiovini B, Vizi E S, Roska B, Rozsa B (2012); Fast two-photon in vivo imaging with three-dimensional random-access scanning in large tissue volumes. Nature methods 9:201-208) allows shorter pixel dwell times, but in this case, the fast lateral scans are restricted to two dimensions; 3D trajectory scans, however, still need to be interrupted by time-consuming jumps when moving along the z axis. In other words, scanning along the z axis still suffers from the same limitation as during point-by-point scanning.


It is an objective of the present invention to overcome the problems associated with the prior art. In particular, it is an objective of the invention to generalize the previous methods by deriving a one-to-one relationship between the focal spot coordinates and speed, and the chirp parameters of the four AO deflectors to allow fast scanning drifts with the focal spot not only in the horizontal plane, but also along any 3D line, starting at any point in the scanning volume (3D drift AO scanning).


SUMMARY OF INVENTION

These objectives are achieved by a method for scanning along a substantially straight line (3D line) lying at an arbitrary direction in a 3D space with a given speed using a 3D laser scanning microscope having a first pair of acousto-optic deflectors deflecting a laser beam in the x-z plane (x axis deflectors) and a second pair of acousto-optic deflectors deflecting the laser beam in the y-z plane (y axis deflectors) for focusing the laser beam in 3D, the method comprising:


determining the coordinates x0(0), y0(0), z0(0) of one end of the 3D line serving as the starting point,


determining scanning speed vector components vx0, vy0, vzx0 (=vzy0) such that the magnitude of the scanning speed vector corresponds to the given scanning speed and the direction of the scanning speed vector corresponds to the direction of the 3D line,


providing non-linear chirp signals in the x axis deflectors according to the function:








fix(x,t)=fix(0,0)+(bxi*(t−D/(2*va)−xi/va)+cxi)*(t−D/(2*va)−xi/va)







wherein


i=1 or 2 indicates the first and second x axis deflector respectively, D is the diameter of the AO deflector; and va is the propagation speed of the acoustic wave within the deflector and





Δƒ0x(=ƒ1x(0,0)−ƒ2x(0,0))≠0


and providing non-linear chirp signals in the y axis deflectors according to the function:








fiy(y,t)=fiy(0,0)+(byi*(t−D/(2*va)−yi/va)+cyi)*(t−D/(2*va)−yi/va)







wherein


i=1 or 2 indicates the first and second y axis deflector respectively, and





Δƒ0y(=ƒ1y(0,0)−ƒ2y(0,0))≠0


wherein Δf0x, bx1, bx2, cx1, cx2, Δf0y, by1, by2, cy1, and cy2 are expressed as a function of the initial location (x0(0), y0(0), z0(0)) and velocity vector (vx0, vy0, vzx0=vzy0) of the focal spot.
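By way of illustration only, the chirped driver function defined above can be evaluated directly in software. The following Python sketch is our own illustration, not part of the patent; the aperture diameter and acoustic propagation speed defaults are assumptions typical of TeO2-based deflectors:

```python
import numpy as np

def chirp_frequency(t, x, f0, b, c, D=15e-3, v_a=650.0):
    """Evaluate the non-linear chirp f_ix(x, t) = f_ix(0,0) + (b*tau + c)*tau,
    where tau = t - D/(2*v_a) - x/v_a is the retarded time at position x
    within the deflector aperture.

    f0  : starting acoustic frequency f_ix(0,0), in Hz
    b   : quadratic chirp parameter b_xi (Hz/s^2)
    c   : linear chirp parameter c_xi (Hz/s)
    D   : deflector aperture diameter (assumed 15 mm here)
    v_a : acoustic propagation speed in the deflector (assumed 650 m/s)
    """
    tau = t - D / (2.0 * v_a) - x / v_a
    return f0 + (b * tau + c) * tau
```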


In the context of the present invention a 3D line is a line that has a non-zero dimension along the optical axis (z axis) of the microscope and a non-zero dimension along a plane perpendicular to the optical axis (x-y plane). Accordingly, neither a line that is parallel with the optical axis nor a line lying purely in an x-y plane is considered a 3D line. The line can be described by a set of parametric linear equations of the following general form in 3D:






x0=x0(0)+s*vx0

y0=y0(0)+s*vy0

z0=z0(0)+s*vz0


Since the deflectors are deflecting in the x-z and y-z planes, these equations can be transformed into the equations describing the line projections onto the x-z and y-z planes:







z0=m*x0+n=z0(0)+(vzx0/vx0)*x0−(vzx0/vx0)*x0(0)

z0=k*y0+l=z0(0)+(vzy0/vy0)*y0−(vzy0/vy0)*y0(0)









Here the initial velocity values satisfy vzx0=vzy0=vz0, and the parameters m, n, k, l are determined by the initial position and the initial velocity values vx0, vy0, vz0 along the x, y, z axes:






m=vzx0/vx0

k=vzy0/vy0

n=z0(0)−(vzx0/vx0)*x0(0)

l=z0(0)−(vzy0/vy0)*y0(0)








Preferably, the parameters Δf0x, bx1, bx2, cx1, cx2, Δf0y, by1, by2, cy1, and cy2 are expressed as












Δf0x=x0(0)*F2/(K*Fobj*F1)

bx1=(vzx0*va/(4*K))*(M+(vzx0/vx0)*(x0(0)*F2/(Fobj*F1))/(z0(0)−(vzx0/vx0)*x0(0)))^2

bx2=(vzx0*va/(4*K))*(M+(vzx0/vx0)*(x0(0)*F2/(Fobj*F1))/(z0(0)−(vzx0/vx0)*x0(0)))^2

cx1=(M*va/(2*K))*(M+(vzx0/vx0)*(x0(0)*F2/(Fobj*F1))/(z0(0)−(vzx0/vx0)*x0(0))−M/Fobj)+(vx0/(2*K*M))*(z0(0)−(vzx0/vx0)*x0(0))*(M+(vzx0/vx0)*(x0(0)*F2/(Fobj*F1))/(z0(0)−(vzx0/vx0)*x0(0)))^2

cx2=(M*va/(2*K))*(M+(vzx0/vx0)*(x0(0)*F2/(Fobj*F1))/(z0(0)−(vzx0/vx0)*x0(0))−M/Fobj)−(vx0*(z0(0)−(vzx0/vx0)*x0(0))/(2*K*M))*(M+(vzx0/vx0)*(x0(0)*F2/(Fobj*F1))/(z0(0)−(vzx0/vx0)*x0(0)))^2

Δf0y=y0(0)*F2/(K*Fobj*F1)

by1=(vzy0*va/(4*K))*(M+(vzy0/vy0)*(y0(0)*F2/(Fobj*F1))/(z0(0)−(vzy0/vy0)*y0(0)))^2

by2=(vzy0*va/(4*K))*(M+(vzy0/vy0)*(y0(0)*F2/(Fobj*F1))/(z0(0)−(vzy0/vy0)*y0(0)))^2

cy1=(M*va/(2*K))*(M+(vzy0/vy0)*(y0(0)*F2/(Fobj*F1))/(z0(0)−(vzy0/vy0)*y0(0))−M/Fobj)+(vy0/(2*K*M))*(z0(0)−(vzy0/vy0)*y0(0))*(M+(vzy0/vy0)*(y0(0)*F2/(Fobj*F1))/(z0(0)−(vzy0/vy0)*y0(0)))^2

cy2=(M*va/(2*K))*(M+(vzy0/vy0)*(y0(0)*F2/(Fobj*F1))/(z0(0)−(vzy0/vy0)*y0(0))−M/Fobj)−(vy0*(z0(0)−(vzy0/vy0)*y0(0))/(2*K*M))*(M+(vzy0/vy0)*(y0(0)*F2/(Fobj*F1))/(z0(0)−(vzy0/vy0)*y0(0)))^2







The present invention provides a novel method, 3D drift AO microscopy, in which, instead of keeping the same scanning position, the excitation spot is allowed to drift in any direction with any desired speed in 3D space while continuously recording fluorescence data with no limitation in sampling rate. To realize this, non-linear chirps are used in the AO deflectors with parabolic frequency profiles. The partial drift compensation realized with these parabolic frequency profiles allows the directed and continuous movement of the focal spot in arbitrary directions and with arbitrary velocities determined by the temporal shape of the chirped acoustic signals. During these fast 3D drifts of the focal spot the fluorescence collection is uninterrupted, lifting the pixel dwell time limitation of the previously used point scanning. In this way pre-selected individual scanning points can be extended to small 3D lines, surfaces, or volume elements to cover not only the pre-selected ROIs but also the neighbouring background areas or volume elements.
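As a worked illustration only, the ten chirp parameters given in the closed-form expressions above can be evaluated from the starting position and velocity of the focal spot. The grouping of terms in this Python sketch reflects our reading of the expressions above and should be verified against the original disclosure; K, M, F1, F2 and Fobj are constants of the concrete optical design and are left as inputs:

```python
def drift_chirp_parameters(x0, y0, z0, vx0, vy0, vz0, K, M, F1, F2, Fobj,
                           v_a=650.0):
    """Chirp parameters for 3D drift AO scanning (a sketch; vzx0 = vzy0 = vz0,
    and vx0, vy0 are assumed non-zero, i.e. the scanned line is a true 3D
    line). Returns the ten parameters in a dict."""
    params = {}
    for axis, r0, vr in (("x", x0, vx0), ("y", y0, vy0)):
        # dimensionless term shared by the parameters of this axis pair
        A = (vz0 / vr) * (r0 * F2 / (Fobj * F1)) / (z0 - (vz0 / vr) * r0)
        params["df0" + axis] = r0 * F2 / (K * Fobj * F1)
        b = (vz0 * v_a / (4 * K)) * (M + A) ** 2
        params["b" + axis + "1"] = params["b" + axis + "2"] = b
        common = (M * v_a / (2 * K)) * (M + A - M / Fobj)
        drift = (vr / (2 * K * M)) * (z0 - (vz0 / vr) * r0) * (M + A) ** 2
        params["c" + axis + "1"] = common + drift  # first deflector
        params["c" + axis + "2"] = common - drift  # counter-propagating one
    return params
```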


According to another aspect the invention provides a method for scanning a region of interest with a 3D laser scanning microscope having acousto-optic deflectors for focusing a laser beam within a 3D space defined by an optical axis (Z) of the microscope and X, Y axes that are perpendicular to the optical axis and to each other, the method comprising:

    • selecting guiding points along the region of interest,
    • fitting a 3D trajectory to the selected guiding points,
    • extending each scanning point of the 3D trajectory to substantially straight lines (3D lines) lying in the 3D space such as to extend partly in the direction of the optical axis, which 3D lines are transversal to the 3D trajectory at the given scanning points and which 3D lines, together, define a substantially continuous surface,
    • scanning each 3D line by focusing the laser beam at one end of the 3D line and providing non-linear chirp signals for the acoustic frequencies in the deflectors for continuously moving the focal spot along the 3D line, as outlined in the sketch following this list.
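A minimal procedural outline of this method might look as follows; every callable is a hypothetical stand-in for a stage of the instrument software, not an API defined by the patent:

```python
def scan_region_of_interest(z_stack, select_guiding_points, fit_trajectory,
                            make_transversal_lines, drift_scan_line):
    """Skeleton of the claimed ROI-scanning method (hypothetical stages)."""
    guiding_points = select_guiding_points(z_stack)   # select guiding points
    trajectory = fit_trajectory(guiding_points)       # fit a 3D trajectory
    lines = make_transversal_lines(trajectory)        # extend to 3D lines
    # focus at one end of each 3D line, then drift along it while
    # collecting fluorescence continuously
    return [drift_scan_line(start, end) for (start, end) in lines]
```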


The 3D lines may, for example, be 5 to 20 μm in length.


Preferably, the 3D lines are substantially perpendicular to the 3D trajectory.


Preferably, the method includes extending each scanning point of the 3D trajectory to a plurality of parallel substantially straight lines of 5 to 20 μm length defining surfaces that are substantially transversal to the 3D trajectory at the given scanning points.


Preferably, the method includes extending each scanning point of the 3D trajectory to a plurality of parallel substantially straight lines of 5 to 20 μm length which straight lines, together, define a substantially continuous volume such that the 3D trajectory is located inside this volume.


Preferably, the method includes extending each scanning point of the 3D trajectory to a plurality of parallel substantially straight lines of 5 to 20 μm length defining cuboids that are substantially centred on the 3D trajectory at the given scanning points.


Although there are several ways to extend single scanning points to surface and volume elements, and the possible combinations of 3D lines, surfaces, and volumes are almost unlimited, the inventors have found six new scanning methods that are particularly advantageous: 3D ribbon scanning; chessboard scanning; multi-layer, multi-frame imaging; snake scanning; multi-cube scanning; and multi-3D line scanning. Each of them is optimal for a different neurobiological aim.


Volume or area scanning used in these methods allows motion artifact correction on a fine spatial scale and, hence, the in vivo measurement of fine structures in behaving animals. Therefore, fluorescence information can be preserved from the pre-selected ROIs during 3D measurements even in the brain of behaving animals, while maintaining the 10-1000 Hz sampling rate necessary to resolve neural activity at the individual ROIs. It can be demonstrated that these scanning methods can decrease the amplitude of motion artifacts by over an order of magnitude and therefore enable the fast functional measurement of neuronal somata and fine neuronal processes, such as dendritic spines and dendrites, even in moving, behaving animals in a z-scanning range of more than 650 μm in 3D.


Further advantageous embodiments of the invention are defined in the attached dependent claims.





BRIEF DESCRIPTION OF DRAWINGS

Further details of the invention will be apparent from the accompanying figures and exemplary embodiments.



FIG. 1A is a schematic illustration of longitudinal and transversal scanning with a laser scanning acousto-optic microscope.



FIG. 1B shows diagrams of exemplified dendritic and spine transients which were recorded using 3D random-access point scanning during motion (light) and rest (dark) from one dendritic and one spine ROI indicated with white triangles in the inset.



FIG. 1C is a 3D image of a dendritic segment of a selected GCaMP6f-labelled neuron and a selected ribbon around the dendritic segment shown with dashed line.



FIG. 1D illustrates colour-coded diagrams showing average Ca2+ responses along the ribbon of FIG. 1C during spontaneous activity using the longitudinal (left) and the transversal (right) scanning modes.



FIG. 2A is a diagram of brain motion recordings.



FIG. 2B shows a normalized amplitude histogram of the recorded brain motion. Inset shows average and average peak-to-peak displacements in the resting and running periods.



FIG. 2C shows the normalized change in relative fluorescence amplitude as a function of distance. Inset shows dendritic segment example.



FIG. 2D is an image of a soma of a GCaMP6f-labelled neuron (left), and normalized increase in signal-to-noise ratio (right).



FIG. 2E corresponds to FIG. 2D, but for dendritic recordings.



FIG. 2F shows the diagrams of brain motion recordings before and after motion correction.



FIG. 2G shows further examples for motion-artifact correction.



FIG. 2H shows somatic transients with and without motion correction.



FIG. 3A is a schematic perspective view of multiple dendritic segments.



FIG. 3B shows numbered frames in the x-y and x-z planes indicating twelve 3D ribbons used to simultaneously record twelve spiny dendritic segments.



FIG. 3C shows the results of fluorescence recordings made simultaneously along the 12 dendritic regions shown in FIG. 3B.



FIG. 3D shows Ca2+ transients derived from the 132 numbered regions highlighted in FIG. 3C.



FIG. 3E shows raster plots of activity pattern of the dendritic spines indicated in FIG. 3C.



FIG. 3F shows Ca2+ transients from the five exemplified dendritic spines indicated with numbers in FIG. 3C.



FIG. 3G shows raster plot of the activity pattern of the five dendritic spines from FIG. 3F.



FIG. 4A shows a schematic perspective illustration of chessboard scanning.



FIG. 4B is a schematic perspective view of the selected scanning regions.



FIG. 4C shows a schematic image of 136 somata during visual stimulation.



FIG. 4D shows representative somatic Ca2+ responses derived from the colour-coded regions in FIG. 4C following motion-artifact compensation.



FIG. 4E shows raster plot of average Ca2+ responses induced with moving grating stimulation into eight different directions from the colour coded neurons shown in FIG. 4C.



FIG. 4F is a schematic perspective view of multi-frame scanning.



FIG. 4G is a dendritic image of a GCaMP6f-labelled layer V pyramidal neuron selected from a sparsely labelled V1 network.



FIG. 4H is the x-z projection of the neuron shown in FIG. 4G, depicting simultaneously imaged dendritic and somatic Ca2+ responses.



FIG. 4I shows the Ca2+ transients derived for each ROI.



FIG. 5A is a 3D view of a layer II/III neuron labelled with the GCaMP6f sensor, where rectangles indicate four simultaneously imaged layers.



FIG. 5B shows average baseline fluorescence in the four simultaneously measured layers shown in FIG. 5A.



FIG. 5C shows somatic Ca2+ responses derived from the numbered yellow sub-regions shown in FIG. 5B following motion artifact elimination.



FIG. 5D shows averaged baseline fluorescence images from FIG. 5B.



FIG. 6A shows a schematic perspective illustration of snake scanning.



FIG. 6B is a z projection of a pyramidal neuron in the V1 region labelled with the GCaMP6f sensor using sparse labelling, and shows the selected dendritic segment at an enlarged scale.



FIG. 6C shows the results of fast snake scanning performed at 10 Hz in the selected dendritic region shown in FIG. 6B.



FIG. 6D is the same dendritic segment as in FIG. 6C, but the 3D volume is shown as x-y and z-y plane projections.



FIG. 6E shows a schematic perspective illustration of 3D multi-cube scanning.



FIG. 6F shows volume-rendered image of 10 representative cubes selecting individual neuronal somata for simultaneous 3D volume imaging.



FIG. 6G shows Ca2+ transients derived from the 10 cubes shown in FIG. 6F following 3D motion correction.



FIG. 7A shows a schematic perspective illustration of multi-3D line scanning.



FIG. 7B shows the amplitude of brain motion; the average motion direction is shown by an arrow.



FIG. 7C is a z projection of a layer 2/3 pyramidal cell labelled with GCaMP6f; white lines indicate the scanning lines running through the 164 pre-selected spines.



FIG. 7D shows a single raw Ca2+ response recorded along 14 spines using multi-3D line scanning.



FIG. 7E shows exemplified spine Ca2+ transients induced by moving grating stimulation in four different directions.



FIG. 7F shows selected Ca2+ transients measured using point scanning (left) and multi-3D line scanning (right).



FIG. 8 shows a schematic perspective illustration of the different fast 3D scanning methods according to the present invention.



FIG. 9 shows a schematic illustration of the optical geometry of a 3D scanner and focusing system.





SUMMARY OF REFERENCE NUMERALS















10 microscope
12 laser source
14 laser beam
16 acousto-optic deflector
18 objective
20 detector
26 sample









DESCRIPTION OF EMBODIMENTS

An exemplary laser scanning acousto-optic (AO) microscope 10 is illustrated in FIG. 1A which can be used to perform the method according to the invention. The AO microscope 10 comprises a laser source 12 providing a laser beam 14, acousto-optic deflectors 16 and an objective 18 for focusing the laser beam 14 on a sample, and one or more detectors 20 for detecting back-scattered light and/or fluorescent light emitted by the sample. Other arrangements of the AO deflectors 16 are also possible, as known in the art. Further optical elements (e.g. mirrors, beam splitters, Faraday isolator, dispersion compensation module, laser beam stabilisation module, beam expander, angular dispersion compensation module, etc.) may be provided for guiding the laser beam 14 to the AO deflectors 16 and the objective 18, and for guiding the back-scattered and/or the emitted fluorescent light to the detectors 20, as is known in the art (see e.g. Katona et al. "Fast two-photon in vivo imaging with three-dimensional random-access scanning in large tissue volumes", Nature methods 9:201-208; 2012). Naturally, a laser scanning microscope 10 with a different structure may also be used.


The laser source 12 used for two-photon excitation may be a femtosecond pulse laser, e.g. a mode-locked Ti:S laser, which produces the laser beam 14. In such a case the laser beam 14 consists of discrete laser pulses, which pulses have femtosecond pulse width and a repetition frequency in the MHz range.


Preferably, a Faraday isolator is located in the optical path of the laser beam 14, which prevents reflections of the laser beam from travelling back into the laser source, thereby aiding smoother output performance. After passing through the Faraday isolator, the laser beam 14 preferably passes into a dispersion compensation module, in which pre-dispersion compensation is performed with prisms in a known way. After this, the laser beam 14 preferably passes through a beam stabilisation module and a beam expander before reaching the AO deflectors 16.


The laser beam 14 deflected by the AO deflectors 16 preferably passes through an angular dispersion compensation module for compensating the angular dispersion of the beam 14, as is known in the art. The objective 18 focuses the laser beam 14 onto a sample 26 placed after the objective 18. Preferably, a beam splitter is placed between the angular dispersion compensation module and the objective 18, which transmits a part of the laser beam 14 reflected from the sample 26 and/or emitted by the sample 26 and collected by the objective 18 to the photomultiplier (PMT) detectors 20, as is known in the art.


According to the inventive method scanning points are extended to 3D lines and/or surfaces and/or volume elements in order to substantially increase the signal-to-noise ratio, which allows for performing measurements in vivo, e.g. in a moving brain.


The 3D drift AO scanning according to the invention allows not only for scanning individual points, but also for scanning along any segments of any 3D lines situated in any location in the entire scanning volume. Therefore, any folded surface (or volume) elements can be generated, for example from transversal or longitudinal lines as illustrated in FIG. 1A. In this way, fluorescence information can be continuously collected when scanning the entire 3D line in the same short period of time (≈20 μs) as required for single-point scanning in the point-by-point scanning mode. Data acquisition rate is limited only by the maximal sampling rate of the PMT detectors 20.


It is therefore possible to generate folded surface elements with the 3D drift AO scanning technology in 3D, and fit them to any arbitrary scanning trajectory, e.g. long, tortuous dendrite segments and branch points, in an orientation which minimizes fluorescence loss during brain motion. This technique is referred to as 3D ribbon scanning (see FIG. 1C).


To achieve 3D ribbon scanning, the first step is to select guiding points along a region of interest (e.g. a dendritic segment or any other cellular structure).


The second step is to fit a 3D trajectory to these guiding points using e.g. piecewise cubic Hermite interpolation. Two preferred strategies to form ribbons along the selected 3D trajectory are to generate drifts (short scans during which the focal spot moves continuously) either parallel to the trajectory (longitudinal drifts) or orthogonal to the trajectory (transverse drifts), as illustrated in FIG. 1A. In both cases, it is preferred to orient these surface elements to lie as parallel as possible to the plane of brain motion or to the nominal focal plane of the objective. The basic idea behind the latter is that the point spread function is elongated along the z axis: fluorescence measurements are therefore less sensitive to motion along the z axis. Therefore, it is also possible to follow this second strategy and generate multiple x-y frames for neuronal network and neuropil measurements (see below).
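A sketch of this trajectory fitting and transverse drift generation is given below, using SciPy's piecewise cubic Hermite interpolation as named in the text; the array conventions, the 10 μm default drift length, and the use of a measured motion direction are our assumptions:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def transverse_ribbon(guiding_points, motion_dir, n_lines=100, drift_len=10e-6):
    """Fit a 3D trajectory through the guiding points with piecewise cubic
    Hermite interpolation and generate one transverse 3D drift line per
    sample point, oriented along the measured brain-motion direction.

    guiding_points: (N, 3) array; motion_dir: (3,) vector; drift_len: length
    of each transversal 3D line (the 10 um default is an assumption)."""
    pts = np.asarray(guiding_points, float)
    # parametrize the trajectory by cumulative chord length
    s = np.concatenate(([0.0],
                        np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))))
    traj = PchipInterpolator(s, pts, axis=0)
    u = np.linspace(0.0, s[-1], n_lines)
    centres = traj(u)
    tangents = traj.derivative()(u)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    # make the drift direction transverse to the trajectory by removing the
    # tangential component of the motion direction (a real implementation
    # would need a fallback where motion_dir is parallel to the trajectory)
    m = np.asarray(motion_dir, float)
    m /= np.linalg.norm(m)
    d = m - tangents * (tangents @ m)[:, None]
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    return centres - 0.5 * drift_len * d, centres + 0.5 * drift_len * d
```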


In the following, the implementation and efficiency of the different scanning strategies will be demonstrated which can be performed by the 3D drift AO scanning method according to the present invention.


Example 1: 3D Ribbon Scanning to Compensate In Vivo Motion Artifacts

To demonstrate 3D ribbon scanning we labelled a small portion of pyramidal neurons in the V1 region of the visual cortex with a Ca2+ sensor, GCaMP6f, using an AAV vector for delivery. Then, according to the z-stack taken in advance, we selected guiding points and fitted the 3D trajectory which covered a spiny dendritic segment of a labelled pyramidal cell (FIG. 1C). FIG. 1C shows a 3D image of a dendritic segment of a selected GCaMP6f-labelled neuron. Cre-dependent GCaMP6f-expressing AAV vectors were used to induce sparse labelling. A 3D ribbon (indicated with dashed lines) was selected for fast 3D drift AO scanning within the cuboid.


We used transversal drifts to scan along the selected 3D ribbons to measure the selected 140 μm dendritic segment and spines at 70.1 Hz (FIG. 1D). Raw fluorescence data (raw) were measured along the selected 3D ribbon and were projected into 2D along the longitudinal and transversal axes of the ribbon following elimination of motion artifacts. Average Ca2+ responses along the ribbon during spontaneous activity (syn.) were colour coded. Using longitudinal drifts allowed a much faster measurement (in the range of 139.3 Hz to 417.9 Hz) of the same dendritic segment because fewer (but longer) 3D lines were required to cover the same ROI. In the next step, 3D recorded data were projected into 2D as a function of perpendicular and transversal distances along the surface of the ribbon. Note that, in this way, the dendritic segment was straightened to a frame (FIG. 1D) to record its activity in 2D movies. This projection also allowed the use of an adapted version of prior art methods developed for motion artifact elimination in 2D scanning (see Greenberg D S, Kerr J N (2009) Automated correction of fast motion artifacts for two-photon imaging of awake animals. Journal of neuroscience methods 176:1-15).


The need to extend single scanning points to surface or volume elements in order to preserve the surrounding fluorescence information for motion artifact elimination is also indicated by the fact that fluorescence information could be completely lost during motion in behaving animals when using the point scanning method. FIG. 1B illustrates exemplified dendritic and spine transients which were recorded using 3D random-access point scanning during motion (light) and rest (dark) from one dendritic and one spine region of interest (ROI) indicated with white triangles in the inset. Note that fluorescence information can reach the background level in a running period, indicating that single points are not sufficient to monitor activity in moving, behaving animals.



FIGS. 2A-2H demonstrate the quantitative analysis of the motion artifact elimination capability of 3D drift AO scanning.


In order to quantify motion-induced errors and the efficiency of motion artifact correction during ribbon scanning, we first measured brain movement by rapidly scanning a bright, compact fluorescence object which was surrounded by a darker background region. To do this, we centred a small scanning volume, a cube, on the fluorescence object, and displacement was calculated from the x-y, x-z, and y-z projections while the examined mouse was running in a linear virtual maze. We separated resting and moving periods according to the simultaneously recorded locomotion information (FIGS. 2A and 2B). In the case of FIG. 2A brain motion was recorded at 12.8 Hz by volume imaging a bright, compact fluorescent object which was surrounded by a darker region. FIG. 2A shows exemplified transient of brain displacement projected on the x axis from a 225 s measurement period when the mouse was running (light) or resting (dark) in a linear maze. Movement periods of the head-restrained mice were detected by using the optical encoder of the virtual reality system.
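One possible implementation of this displacement measurement from the three projections of the recorded cube is sketched below; the use of phase cross-correlation and the (z, y, x) array convention are our assumptions, not prescribed by the patent:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def displacement_from_cube(volume_t, volume_ref, upsample=10):
    """Estimate 3D brain displacement from the three maximum-intensity
    projections (x-y, x-z, y-z) of a small recorded cube, registered
    against the same projections of a reference cube.
    volume_t, volume_ref: 3D arrays indexed (z, y, x)."""
    shifts = []
    for ax in (0, 1, 2):  # project out z, y and x in turn
        ref = volume_ref.max(axis=ax)
        cur = volume_t.max(axis=ax)
        s, _, _ = phase_cross_correlation(ref, cur, upsample_factor=upsample)
        shifts.append(s)
    # each projection yields two of the three components; average duplicates
    dz = (shifts[1][0] + shifts[2][0]) / 2.0
    dy = (shifts[0][0] + shifts[2][1]) / 2.0
    dx = (shifts[0][1] + shifts[1][1]) / 2.0
    return np.array([dx, dy, dz])
```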


Displacement data were separated into two intervals according to the recorded locomotion information (running in light colour and resting in dark colour) and a normalized amplitude histogram of brain motion was calculated for the two periods (see FIG. 2B). Inset shows average and average peak-to-peak displacements in the resting and running periods.



FIG. 2C shows the normalized change in relative fluorescence amplitude as a function of distance from the centre of GCaMP6f-labelled dendritic segments (ΔF/F(x), mean±SEM, n=3). Dashed lines indicate average peak-to-peak displacement values calculated for the resting and running periods, respectively. Note the >80% drop in ΔF/F amplitude for the average displacement value during running. The inset shows a dendritic segment example. ΔF/F was averaged along a dashed line and then the line was shifted and averaging was repeated to calculate ΔF/F(x).


Brain motion can induce fluorescence artifacts, because there is a spatial inhomogeneity in baseline fluorescence and also in the relative fluorescence signals (FIG. 2C). The amplitude of motion-generated fluorescence transients can be calculated by substituting the average peak-to-peak motion error into the histogram of the relative fluorescence change (FIG. 2C). The average ΔF/F(x) histogram was relatively sharp for dendrites (n=3, 100 μm long dendritic segments, FIG. 2C); therefore, the average motion amplitude during running corresponds to a relatively large (80.1±3.1%, average of 150 cross sections) drop in the fluorescence amplitude, which is about 34.4±15.6-fold higher than the average amplitude of a single AP-induced Ca2+ response. These data indicate the need for motion artifact compensation.


On the left of FIG. 2D an image of a soma of a GCaMP6f-labelled neuron can be seen. Points and dashed arrows indicate scanning points and scanning lines, respectively. On the right, FIG. 2D shows normalized increase in signal-to-noise ratio calculated for resting (dark) and running (light) periods in awake animals when scanning points were extended to scanning lines in somatic recordings, as shown on the left. Signal-to-noise ratio with point-by-point scanning is indicated with dashed line.



FIG. 2E demonstrates similar calculations as in FIG. 2D, but for dendritic recordings. The signal-to-noise ratio of point-by-point scanning of dendritic spines was compared to 3D ribbon scanning during resting (dark) and running (light) periods. Note the more than 10-fold improvement when using 3D ribbon scanning.


Next, we analyzed the efficiency of our methods for motion correction during in vivo measurements. As before, we labelled neurons and their processes with a GCaMP6f sensor, used 3D ribbon scanning, and projected the recorded fluorescence data to movie frames. We got the best results when each frame of the video recorded along 3D ribbons was corrected by shifting the frames at subpixel resolution to maximize the fluorescence cross-correlation between the successive frames (FIG. 2F). On the left of FIG. 2F an exemplified individual Ca2+ transient can be seen from a single dendritic spine, derived from a movie which was recorded with 3D ribbon scanning along a 49.2 μm spiny dendritic segment in a behaving mouse (raw trace). When Ca2+ transients were derived following motion-artifact correction performed at pixel and sub-pixel resolution, the motion-induced artifacts were eliminated and the signal-to-noise ratio was improved.
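This correction can be sketched as follows. Here each frame is registered to a common reference at subpixel resolution; the reference choice and the upsampling factor are simplifying assumptions of ours (the text correlates successive frames):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def correct_motion(frames, reference=None, upsample=20):
    """Shift each movie frame at subpixel resolution so that its
    cross-correlation with a reference frame is maximized.
    frames: (T, H, W) array from the 2D-projected ribbon recording."""
    frames = np.asarray(frames, float)
    if reference is None:
        reference = frames.mean(axis=0)  # a bright frame would also do
    corrected = np.empty_like(frames)
    for i, frame in enumerate(frames):
        drift, _, _ = phase_cross_correlation(reference, frame,
                                              upsample_factor=upsample)
        # apply the subpixel backshift with linear interpolation
        corrected[i] = nd_shift(frame, drift, order=1, mode="nearest")
    return corrected
```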


Ribbon scanning and the successive frame shifts at subpixel resolution in running animals increased the signal-to-noise ratio by 7.56±3.14-fold (p<0.000015, n=10) as compared to 3D random-access point scanning (FIG. 2G). FIG. 2G shows further examples for motion-artifact correction. At the top, a single frame can be seen from the movie recorded with 3D ribbon scanning from an awake mouse. At the bottom, exemplified Ca2+ transients can be seen that are derived from the recorded movie frames from the colour-coded regions. As can be seen, the signal-to-noise ratio of the transients improved when they were derived following motion-artifact correction with subpixel resolution. On the right, the signal-to-noise ratio of the calculated spine Ca2+ transients can be seen (100 transients, n=5/5 spines/mice). Transients are shown without and with motion correction at subpixel resolution.


Next we investigated separately the effect of the post-hoc frame shifts on the signal-to-noise ratio following ribbon scanning. Low-amplitude spine Ca2+ transients were barely visible when transients were derived from the raw video. For a precise analytical analysis we added the same 1, 2, 5, and 10 action-potential-induced average transients to the images of a spiny dendritic segment and a soma. Then we generated frame series by shifting each frame with the amplitude of brain motion recorded in advance (similarly to FIG. 2A). Finally, we recalculated Ca2+ transients from the frame series with and without using the motion-correction algorithm, using ROIs of the same size, to compare the signal-to-noise ratio of point-by-point scanning and the motion-corrected 3D ribbon scanning on the same area. Our data indicate that 3D ribbon scanning, which, in contrast to point-by-point scanning, allows motion correction, can largely improve the signal-to-noise ratio in the case of the small, 1-5 AP-associated signals recorded most frequently during in vivo measurements (11.33±7.92-fold, p<0.025, n=4 somata; n=100 repeats for 1 AP), but the method also significantly improved the signal-to-noise ratio of burst-associated and dendritic responses. Finally, we quantified the efficiency of our method in a "classical" behaviour experimental protocol. We simultaneously recorded multiple somata of vasoactive intestinal polypeptide-expressing (VIP) interneurons during conditioned and unconditioned stimuli. Reward induced large responses in GCaMP6f-labelled neurons whose Ca2+ signals temporally overlapped with the behaviour-induced motion; therefore, Ca2+ transients were associated with large motion artifacts, and even transients with negative amplitude could have been generated. However, our method effectively improved the signal-to-noise ratio in these experiments (FIG. 2H). On the left of FIG. 2H, simultaneous 3D imaging of VIP neuron somata can be seen during a classical behaviour experiment where a conditioned stimulus (water reward) and an unconditioned stimulus (air puff, not shown) were given for two different sounds. Exemplified somatic transients are shown with (light) and without motion correction at subpixel resolution (dark). The bottom diagram shows motion amplitude. Note that motion-induced and neuronal Ca2+ transients overlap. Moreover, transients could have a negative amplitude without motion correction. On the right, the signal-to-noise ratio of the transients is shown with (light) and without (dark) motion correction (mean±SEM, n=3).


Example 2: Recording of Spiny Dendritic Segments with Multiple 3D Ribbon Scanning

Recently it has been reported that for many cortical neurons, synaptic integration occurs not only at the axon initial segment but also within the apical and basal dendritic tree. Here, dendritic segments form non-linear computational subunits which also interact with each other, for example through local regenerative activities generated by non-linear voltage-gated ion channels. However, in many cases, the direct result of local dendritic computational events remains hidden in somatic recordings. Therefore, to understand computation in neuronal networks we also need novel methods for the simultaneous measurement of multiple spiny dendritic segments. Although previous studies have demonstrated the simultaneous recording of multiple dendritic segments under in vitro conditions, in vivo recording over large z-scanning ranges has remained an unsolved problem because the brain movement generated by heartbeat, breathing, or physical motion has inhibited the 3D measurement of these fine structures. Therefore, we implemented 3D ribbon scanning to simultaneously record the activity of multiple dendritic segments illustrated in FIG. 3A.


As in the 3D measurement of single dendritic segments, we took a z-stack in advance, selected guiding points in 3D along multiple dendritic segments, and fitted 3D trajectories and, finally, 3D ribbons to each of the selected dendritic segments (FIG. 3B). As above, the surface of the ribbons was set to be parallel to the average motion vector of the brain to minimize the effect of motion artifacts. We selected 12 dendritic segments from a GCaMP6f-labelled V1 pyramidal neuron for fast 3D ribbon scanning (FIG. 3B). FIG. 3B shows maximal intensity projections in the x-y and x-z planes of a GCaMP6f-labelled layer II/III pyramidal neuron. Numbered frames indicate the twelve 3D ribbons used to simultaneously record twelve spiny dendritic segments using 3D ribbon scanning. White frames indicate the same spiny dendritic segments but on the x-z projection.


In the next step, 3D data recorded along each ribbon were 2D projected as a function of distance perpendicular to the trajectory and along the trajectory of the given ribbon. Then, these 2D projections of the dendritic segments were ordered as a function of their length and were placed next to each other (FIG. 3C). At the top, fluorescence was recorded simultaneously along the 12 dendritic regions shown in FIG. 3B. Fluorescence data were projected into a 2D image as a function of the distance along the longitudinal and transverse directions of each ribbon, then all images were ordered next to each other. This transformation allowed the simultaneous recording, successive motion artifact elimination, and visualization of the activity of the 12 selected dendritic regions as a 2D movie. The top image is a single frame from the movie recorded at 18.4 Hz. The inset is an enlarged view of dendritic spines showing the preserved two-photon resolution. At the bottom, numbers indicate 132 ROIs: dendritic segments and spines selected from the video. Note that, in this way, all the dendritic segments are straightened and visualized in parallel. In this way we are able to transform and visualize 3D functional data in real-time as a standard video movie. The 2D projection used here allows fast motion artifact elimination and simplifies data storage, data visualization, and manual ROI selection.


Since each ribbon can be oriented differently in the 3D space, the local coordinate system of measurements varies as a function of distance along a given ribbon, and also between ribbons covering different dendritic segments. Therefore, brain motion generates artifacts with different relative directions at each ribbon, so the 2D movement correction methods used previously cannot be used for the flattened 2D movie generated from ribbons. To solve this issue, we divided the recordings of each dendritic region into short segments. Then the displacement of each 3D ribbon segment was calculated by cross-correlation, using the brightest image as a reference. Knowing the original 3D orientation of each segment, the displacement vector for each ribbon segment could be calculated. Then we calculated the median of these displacement vectors to estimate the net displacement of the recorded dendritic tree. Next, we projected back the net displacement vector to each ribbon segment to calculate the required backshift for each image of each ribbon segment for motion elimination. Finally, we repeated the algorithm separately in each and every segment to let the algorithm correct for local inhomogeneity in displacement. This allowed, for example, the depth-, vasculature-, and distance-dependent inhomogeneities in displacement to be eliminated. Following this 3D to 2D transformation and motion artifact elimination, we were able to apply previously developed 2D methods to our 3D Ca2+ data to calculate regular Ca2+ transients from, for example, over 130 spines and dendritic regions (FIGS. 3C and 3D). Using our methods, we detected spontaneous and visual stimulation-induced activities (FIGS. 3D and 3F). In FIG. 3F transients were induced by moving grating stimulation. Note the variability in spatial and temporal timing of individual spines. Finally, we generated two raster plots from spine assembly patterns to demonstrate that both synchronous and asynchronous activities of dendritic spine assemblies can be recorded in behaving, moving animals (FIGS. 3E and 3G). In FIG. 3G the time of moving grating stimulation in eight different directions is indicated with a grey bar.
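The pooling of per-segment displacements described above might be implemented along the following lines; the array conventions and the orthonormal-basis representation of each ribbon segment are our own choices:

```python
import numpy as np

def net_displacement(segment_shifts_2d, segment_bases_3d):
    """Combine per-ribbon-segment 2D shifts into one net 3D brain
    displacement and per-segment backshifts (a sketch).

    segment_shifts_2d: (N, 2) in-ribbon shifts from cross-correlation.
    segment_bases_3d: (N, 2, 3) orthonormal 3D direction vectors of each
    segment's two in-ribbon axes, known from the ribbon geometry."""
    # lift each 2D shift into 3D using the segment's local coordinate system
    d3 = np.einsum("nk,nkj->nj", segment_shifts_2d, segment_bases_3d)
    # the median over segments is robust to local estimation failures
    net = np.median(d3, axis=0)
    # project the net displacement back onto each segment's axes to get the
    # backshift applied to that segment's images
    backshifts = np.einsum("j,nkj->nk", net, segment_bases_3d)
    return net, backshifts
```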


Example 3: Multi-Layer, Multi-Frame Imaging of Neuronal Networks: Chessboard Scanning

To understand neuronal computation, it is also important to record not only assemblies of spines and dendrites, but also populations of somata. Random-access point scanning is a fast method which provides a good signal-to-noise ratio for population imaging in in vitro measurements and in anesthetized mouse models; however, point scanning generates large motion artifacts during recording in awake, behaving animals for two reasons. First, the amplitude of motion artifacts is at the level of the diameter of the soma. Second, baseline and relative fluorescence is not homogeneous in space, especially when GECIs are used for labelling (FIG. 2C). Therefore, we need to detect fluorescence information not only from a single point of each soma, but also from surrounding neighbouring ROIs, in order to preserve somatic fluorescence information during movement. To achieve this, we extended each scanning point to small squares and, in other sets of measurements (see below), to small cubes. We can use the two main strategies described above to set the orientation of the squares to be optimal for motion correction: namely, we can set the squares to be either parallel to the direction of motion, or parallel to the nominal focal plane of the objective (FIG. 4A); this second strategy will be demonstrated here. FIG. 4B is a schematic perspective view of the selected scanning regions. Neurons from a mouse in the V1 region were labelled with the GCaMP6f sensor. Neuronal somata and surrounding background areas (small horizontal frames) were selected according to a z-stack taken at the beginning of the measurements. Scale bars in FIG. 4B are 50 μm.


Similarly to 3D ribbon scanning, we can generate a 2D projection of the 3D data during multi-layer, multi-frame recording, even during image acquisition, by simply arranging all the squares, and hence each soma, into a "chessboard" pattern for better visualization and movie recording (this version of multi-layer, multi-frame imaging is called "chessboard" scanning). Similarly to the 3D ribbon scanning, here we calculated the average brain displacement vector as a function of time and subtracted it from all frames to correct motion artifacts. Finally, we could select sub-regions from the 2D projection and calculate the corresponding Ca2+ transients as above (FIGS. 4C-D) and detect orientation- and direction-sensitive neurons with moving grating stimulation (FIG. 4E). In FIG. 4C selected frames are "transformed" into a 2D "chessboard", where the "squares" correspond to single somata. Therefore, the activity can be recorded as a 2D movie. The image shown in FIG. 4C is a single frame from the video recording of 136 somata during visual stimulation. FIG. 4D shows representative somatic Ca2+ responses derived from the colour-coded regions in FIG. 4C following motion-artifact compensation. FIG. 4E shows a raster plot of average Ca2+ responses induced with moving grating stimulation into eight different directions from the colour-coded neurons shown in FIG. 4C.
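A minimal sketch of the "chessboard" rearrangement of the per-soma squares into a single 2D movie frame (the tiling order and zero padding are our choices, not fixed by the patent):

```python
import numpy as np

def chessboard(frames, n_cols=None):
    """Tile per-soma frames into one 2D 'chessboard' image for movie export.
    frames: list of equally sized 2D arrays, one per scanned square."""
    n = len(frames)
    n_cols = n_cols or int(np.ceil(np.sqrt(n)))
    n_rows = int(np.ceil(n / n_cols))
    h, w = frames[0].shape
    board = np.zeros((n_rows * h, n_cols * w), dtype=frames[0].dtype)
    for i, f in enumerate(frames):
        r, c = divmod(i, n_cols)  # row-major tiling order
        board[r * h:(r + 1) * h, c * w:(c + 1) * w] = f
    return board
```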


Multi-layer, multi-frame scanning combines the advantage of low phototoxicity of low-power temporal oversampling (LOTOS) with the special flexibility of the 3D scanning capability of AO microscopy by allowing simultaneous imaging along multiple small frames placed in arbitrary locations in the scanning volume with speeds greater than resonant scanning.


Multi-Layer, Multi-Frame Imaging of Long Neuronal Processes


Multi-layer, multi-frame scanning can also be used to measure neuronal processes (FIG. 4F). FIG. 4F shows the schematic of the measurement. Multiple frames of different sizes and at any position in the scanning volume can be used to capture activities. Because the total z-scanning range with GECIs was extended to over 650 μm, we can, for example, simultaneously image the apical and basal dendritic arbors of layer II/III or V neurons, or follow the activity of dendritic trees in this z-scanning range. To demonstrate the large dendritic imaging range, we selected a GCaMP6f-labelled layer V neuron from a sparsely labelled V1 network (FIG. 4G). FIG. 4H shows the x-z projection of the neuron shown in FIG. 4G. Visual stimulation-induced dendritic and somatic Ca2+ responses were simultaneously imaged at 30 Hz in multiple frames situated at 41 different depth levels over a more than 500 μm z range in an awake animal (FIG. 4H). Colour-coded frames indicated in FIG. 4G show the position of the simultaneously imaged squares. Motion artifacts were eliminated from the frames as above by subtracting the time-dependent net displacement vector, providing motion correction with subpixel resolution. Finally, we derived the Ca2+ transients for each ROI (see FIG. 4I). Transients were induced by moving grating stimulation in the time periods shown in gray.


Naturally, the multi-layer, multi-frame scanning method is not limited to a single dendrite of a single neuron; rather, we can simultaneously image many neurons with their dendritic (or axonal) arbors. FIG. 5A shows a 3D view of a layer II/III neuron labelled with the GCaMP6f sensor. Rectangles indicate four simultaneously imaged layers (ROI 1-4). Numbers indicate distances from the pia mater. Neurons were labelled in the V1 region using sparse labelling. Somata and neuronal processes of the three other GCaMP6f-labelled neurons situated in the same scanning volume were removed from the z-stack for clarity. We selected four layers of neuropil using a nonspecific AAV vector and recorded activity at 101 Hz simultaneously in the four layers. FIG. 5B shows the average baseline fluorescence in the four simultaneously measured layers shown in FIG. 5A. Numbers in the upper right corners indicate imaging depth from the pia mater. In FIG. 5C, representative Ca2+ transients were derived from the numbered yellow sub-regions shown in FIG. 5B following motion artifact elimination. Responses were induced by moving grating stimulation into three different directions at the temporal intervals indicated with gray shadows. In FIG. 5D the averaged baseline fluorescence images from FIG. 5B are shown in gray scale and were overlaid with the colour-coded relative Ca2+ changes (ΔF/F). To show an alternative quantitative analysis, we also calculated Ca2+ transients (FIG. 5C) from some exemplified somatic and dendritic ROIs (FIG. 5B).


Volume Scanning with Multi-Cube and Snake Scanning


Our data demonstrated that, even though the brain moves along all three spatial dimensions, we could still preserve fluorescence information and effectively eliminate motion artifacts by scanning at reduced dimensions, along surface elements, in 3D. However, under some circumstances, for example in larger animals or depending on the surgery or behavioral protocols, the amplitude of motion can be larger and the missing third scanning dimension cannot be compensated for. To sufficiently preserve fluorescence information even in these cases, we can take back the missing scanning dimension by extending the surface elements to volume elements using an automatic algorithm until we reach the required noise elimination efficiency for the measurements. To demonstrate this in two examples, we extended 3D ribbons to folded cuboids (called "snake scanning", FIG. 6A) and multi-frames to multi-cuboids. FIG. 6A shows a schematic of the 3D measurement. 3D ribbons selected for 3D scanning can be extended to 3D volume elements (3D "snakes") to completely involve dendritic spines, parent dendrites, and the neighbouring background volume to fully preserve fluorescence information during brain movement in awake, behaving animals. FIG. 6B is a z projection of a spiny dendritic segment of a GCaMP6f-labelled layer II/III neuron selected from a sparsely labelled V1 region of the cortex for snake scanning. The selected dendritic segment is shown at an enlarged scale. According to the z-stack taken at the beginning, we selected guiding points, interpolated a 3D trajectory, and generated a 3D ribbon which covered the whole segment as described above. Then we extended the ribbon to a volume, and performed 3D snake scanning of the selected folded cuboid (FIGS. 6A-D). In FIG. 6C three-dimensional Ca2+ responses were induced by moving grating stimulation and were projected into 2D as a function of distance along the dendrite and along one of the perpendicular directions. Fast snake scanning was performed at 10 Hz in the selected dendritic region shown in FIG. 6B. Fluorescence data were projected as a function of distance along the longitudinal and the transverse directions, and then data were maximal-intensity-projected along the second (and orthogonal) perpendicular axis to show the average responses for three moving directions separately and together, following motion correction. Alternatively, we could avoid maximal intensity projection to a single plane by transforming the folded snake form into a regular cube. FIG. 6D shows the same dendritic segment as in FIG. 6C, but the 3D volume is shown as x-y and z-y plane projections. Representative spontaneous Ca2+ responses were derived from the coded sub-volume elements corresponding to dendritic spines and two dendritic regions. Transients were derived from the sub-volume elements following 3D motion correction at subpixel resolution. In this representation, Ca2+ transients could be calculated from different sub-volumes (FIG. 6D). Note that, due to the preserved good spatial resolution, we can clearly separate each spine from the others and from the mother dendrite in moving animals. Therefore, we can simultaneously record and separate transients from spines even when they are located in the hidden and overlapping positions, which is required to precisely understand dendritic computation (FIG. 6D).



FIG. 6E shows a schematic perspective illustration of 3D multi-cube scanning. To demonstrate multi-cube imaging, we simply extended the frames to small cubes and added a slightly larger z dimension than the sum of the z diameter of the somata and the peak z movement, to preserve all somatic fluorescence points during motion. FIG. 6F shows a volume-rendered image of 10 representative cubes selecting individual neuronal somata for simultaneous 3D volume imaging. Simultaneous measurements of the ten GCaMP6f-labelled somata were performed from 8.2 Hz up to 25.2 Hz using relatively large cubes, the size of which was aligned to the diameter of the somata (each cube was in the range from 46×14×15 voxels to 46×32×20 voxels, where one voxel was 1.5 μm×3 μm×4 μm and 1.5 μm×1.5 μm×4 μm, respectively). This spatial and temporal resolution made it possible to resolve sub-cellular Ca2+ dynamics. We can further increase the scanning speed or the number of recorded cells inversely with the number of 3D drifts used to generate the cubes. For example, 50 somata can be recorded at 50 Hz when using cubes made of 50×10=5 voxels. Similarly to multi-frame recordings, ROIs can be ordered next to each other for visualization (FIG. 6F). In FIG. 6G Ca2+ transients were derived from the 10 cubes shown in FIG. 6F. As above, here we calculated the net displacement vector and corrected the sub-volume positions at each time point during calculation of the Ca2+ transients in order to eliminate motion. We found that the use of volume scanning reduced the amplitude of motion artifacts in Ca2+ transients by 19.28±4.19-fold during large-amplitude movements in behaving animals. These data demonstrated that multi-cube and snake scanning can effectively be used for the 3D measurement of neuronal networks and spiny dendritic segments in multiple sub-volumes distributed over the whole scanning volume. Moreover, these methods are capable of completely eliminating large-amplitude motion artifacts.
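The stated trade-off between the number of recorded cells, the number of 3D drifts per cube, and the scanning speed can be written out directly. The ~20 μs per-line duration is taken from the earlier description, and AO switching overhead is neglected, so the numbers are only indicative:

```python
def volume_rate(n_volumes, drifts_per_volume, line_time=20e-6):
    """Approximate volumetric repetition rate (Hz) when each recorded volume
    is built from short 3D drift lines of ~line_time seconds each (the ~20 us
    single-line duration mentioned earlier; switching overhead neglected)."""
    return 1.0 / (n_volumes * drifts_per_volume * line_time)

# illustration: 50 somata with 20 drift lines per cube
# -> 1 / (50 * 20 * 20e-6) = 50.0 Hz
```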


Multi-3D Line Scanning


In the previous sections, we extended one-dimensional scanning points to two- or three-dimensional objects. In this section, we instead extend scanning points along only one dimension to perform measurements at a higher speed. We found that, in many experiments, sample movement is small, and brain motion can be approximated by a movement along a single 3D trajectory (FIG. 7B). FIG. 7A shows a schematic perspective illustration of multi-3D line scanning; each scanning line is associated with one spine. In this case, we can extend each point of 3D random-access point scanning to multiple short 3D lines instead of multiple surface and volume elements (FIG. 7A). In the first step, we selected points from the z-stack. In the second step, we recorded brain motion to calculate the average trajectory of the motion. In FIG. 7B, the amplitude of brain motion was recorded in 3D using three perpendicular imaging planes and a bright fluorescent object, as in FIG. 2A; the average motion direction is shown in the z projection image of the motion trajectory. In the third step, we generated short 3D lines with 3D drift AO scanning at each pre-selected point in such a way that the centre of each line coincided with the pre-selected point, and the lines were set to be parallel to the average trajectory of motion (FIGS. 7B and 7C); a brief sketch of this construction is given below. FIG. 7C shows the z projection of a layer 2/3 pyramidal cell labelled with GCaMP6f. We simultaneously detected the activity of 169 spines along the 3D lines (FIGS. 7C and 7E). White lines indicate the scanning lines running through the 164 pre-selected spines; all scanning lines were set to be parallel to the average motion shown in FIG. 7B. The corresponding 3D Ca2+ responses were recorded simultaneously along the 164 spines. In FIG. 7D, a single raw Ca2+ response recorded along 14 spines using multi-3D line scanning is shown; note the movement artifacts in the raw fluorescence. FIG. 7E shows example spine Ca2+ transients induced by moving grating stimulation in the four different directions indicated at the bottom, recorded using point-by-point scanning (left) and multi-3D line scanning (right).
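A minimal sketch of the third step, generating short scanning lines centred on the pre-selected points and oriented along the average motion direction; all coordinates and the line length below are hypothetical:

    import numpy as np

    def lines_along_motion(points, motion_dir, length):
        # points:     (N, 3) pre-selected spine coordinates (um)
        # motion_dir: average motion direction (3,), any magnitude
        # length:     total line length (um), centred on each point
        d = np.asarray(motion_dir, dtype=float)
        d = d / np.linalg.norm(d)        # unit vector of average motion
        half = 0.5 * length * d
        pts = np.asarray(points, dtype=float)
        return np.stack([pts - half, pts + half], axis=1)   # (N, 2, 3)

    spines = [[10.0, 5.0, -20.0], [12.5, 7.0, -18.0], [9.0, 4.0, -25.0]]
    print(lines_along_motion(spines, motion_dir=[0.2, 0.0, 1.0], length=5.0))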



FIG. 7F shows selected Ca2+ transients measured using point scanning (left) and multi-3D line scanning (right). If we switched back from the multi-3D line scanning mode to the classical 3D point-by-point scanning mode, oscillations induced by heartbeat, breathing, and physical motion appeared immediately in the transients (FIG. 7F). These data showed an improvement in the signal-to-noise ratio when multi-3D line scanning was used. In cases when the amplitude of the movement is small and mostly restricted to a 3D trajectory, we can effectively use multi-3D line scanning to rapidly record over 160 dendritic spines in behaving animals.


Advantages of the Different Scanning Modes


Above we presented a novel two-photon microscope technique, 3D drift AO scanning, with which we have generated six novel scanning methods: 3D ribbon scanning; chessboard scanning; multi-layer, multi-frame imaging; snake scanning; multi-cube scanning; and multi-3D line scanning, shown in FIG. 8, where points, lines, surface and volume elements illustrate the ROIs selected for measurements.


Each of these scanning methods is optimal for a different neurobiological aim and can be used alone or in any combination for the 3D imaging of moving samples in large scanning volumes. Our method allows, for the first time, high-resolution 3D measurements of neuronal networks at the level of tiny neuronal processes, such as spiny dendritic segments, in awake, behaving animals, even under conditions when large-amplitude motion artifacts are generated by physical movement.


The above described novel laser scanning methods for 3D imaging using drift AO scanning have different application fields based on how they are suited to different brain structures and measurement speeds. The fastest method is multi-3D line scanning, which is as fast as random-access point-by-point scanning (up to 1000 ROIs with 53 kHz per ROI) and can be used to measure spines or somata (FIG. 8). In the second group, multi-layer, multi-frame imaging, chessboard scanning, and 3D ribbon scanning can measure up to 500 ROIs with 5.3 kHz per ROI along long neuronal processes and somata. Finally, the two volume scanning methods, multi-cube scanning and snake scanning, allow measurement of up to 50-100 volume elements at up to about 1.3 kHz per volume element, and are ideal for measuring somata and spiny dendritic segments, respectively. The two volume scanning methods provide the best noise elimination capability because fluorescence information can be maximally preserved. Finally, we quantified how the improved signal-to-noise ratio of the new scanning strategies improves single-AP resolution in individual Ca2+ transients when a large number of neurons is simultaneously recorded in the moving brain of behaving animals. Chessboard scanning, multi-cube scanning, and multi-layer, multi-frame imaging in behaving animals improved the standard deviation of the Ca2+ responses by factors of 14.89±1.73, 14.38±1.67, and 5.55±0.65, respectively (n=20), as compared to 3D random-access point scanning. Therefore, the standard deviation of the motion-artifact-corrected Ca2+ responses became smaller than the average amplitude of single APs, which made single action potential detection possible in neuronal network measurements in behaving animals. This was not possible with 3D random-access AO microscopy, because there the standard deviation of the Ca2+ responses was 4.85±0.11-fold higher than the amplitude of single APs when the animal was running.


EXPERIMENTAL PROCEDURE

Surgical Procedure


All experimental protocols for the above described methods were carried out on mice. The surgical process was similar to that described previously (Katona et al. (2012); Fast two-photon in vivo imaging with three-dimensional random-access scanning in large tissue volumes. Nature Methods 9:201-208), with some minor modifications. Briefly: mice were anesthetized with a mixture of midazolam, fentanyl, and medetomidine (5 mg, 0.05 mg and 0.5 mg/kg body weight, respectively); the V1 region of the visual cortex was localized by intrinsic imaging (on average 0.5 mm anterior and 1.5 mm lateral to the lambda structure); a round craniotomy was made over V1 using a dental drill and was fully covered with a double cover glass, as described previously (see Goldey G J, Roumis D K, Glickfeld L L, Kerlin A M, Reid R C, Bonin V, Schafer D P, Andermann M L (2014); Removable cranial windows for long-term imaging in awake mice. Nature Protocols 9:2515-2538). For two-photon recordings, mice were awakened from the fentanyl anesthesia with a mixture of nexodal, revetor, and flumazenil (1.2 mg, 2.5 mg, and 2.5 mg/kg body weight, respectively) and kept under calm and temperature-controlled conditions for 2-12 minutes before the experiment. Before the imaging sessions, the mice were kept head-restrained in the dark under the 3D microscope for at least 1 hour to accommodate to the setup. In some of the animals, a second or third imaging session was carried out after 24 or 48 hours, respectively.


AAV Labeling


The V1 region was localized with intrinsic imaging, briefly: the skin was opened and the skull over the right hemisphere of the cortex was cleared. The intrinsic signal was recorded using the same visual stimulation protocol that we used later during the two-photon imaging session. The injection procedure was performed as described previously (Chen T W, Wardill T J, Sun Y, Pulver S R, Renninger S L, Baohan A, Schreiter E R, Kerr R A, Orger M B, Jayaraman V, Looger L L, Svoboda K, Kim D S (2013); Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature 499:295-300), with some modifications. A 0.5 mm hole was opened in the skull with the tip of a dental drill over the V1 cortical region (centered 1.5 mm lateral and 1.5 mm posterior to the bregma). The glass micropipette (tip diameter ≈10 μm) used for the injections was back-filled with 0.5 ml vector solution (≈6×1013 particles/ml), which was then injected slowly (20 nl/s for the first 50 nl, and 2 nl/s for the remaining quantity) into the cortex, at a depth of 400 μm under the pia. For population imaging we used AAV9.Syn.GCaMP6s.WPRE.SV40 or AAV9.Syn.Flex.GCaMP6f.WPRE.SV40 (in the case of Thy-1-Cre and VIP-Cre animals); both viruses were from Penn Vector Core, Philadelphia, Pa. For sparse labeling we injected a 1:1 mixture of AAV9.Syn.Flex.GCaMP6f.WPRE.SV40 and AAV1.hSyn.Cre.WPRE.hGH diluted 10,000 times. The cranial window was implanted over the injection site 2 weeks after the injection, as described in the surgical procedure section.


DISCUSSION

There are a number of benefits of the new 3D drift AO scanning methods in neuroscience: i) it enables a scanning volume with GECIs more than two orders of magnitude larger than in previous realizations, while the spatial resolution is preserved; ii) it offers a method of fast 3D scanning in any direction, with an arbitrary velocity, without any sampling-rate limitation; iii) it makes it possible to add surface and volume elements while keeping the high speed of the recording; iv) it compensates fast motion artifacts in 3D to preserve the high spatial resolution characteristic of two-photon microscopy during 3D surface scanning and volume imaging, even in behaving animals; and v) it enables generalization of the low-power temporal oversampling (LOTOS) strategy of 2D raster scanning to fast 3D AO measurements to reduce phototoxicity.


These technical achievements enabled the realization of the following fast 3D measurement and analysis methods in behaving, moving animals: i) simultaneous functional recording of over 150 spines; ii) fast parallel imaging of the activity of over 12 spiny dendritic segments; iii) precise separation in space and time of the fast signals of each individual spine (and dendritic segment) in the recorded volume, signals which overlap when using the currently available methods; iv) simultaneous imaging of large parts of the dendritic arbor and neuronal networks in a z scanning range of over 650 μm; v) imaging of a large network of over 100 neurons with subcellular resolution in a scanning volume of up to 500 μm×500 μm×650 μm, with a signal-to-noise ratio more than an order of magnitude larger than for 3D random-access point scanning; and vi) decoding of APs with over 10-fold better single-AP resolution in neuronal network measurements.


The limits of our understanding of neural processes now lie at the fast dendritic and neuronal activity patterns occurring in living tissue in 3D, and at their integration over larger network volumes. Until now, these aspects of neural circuit function have not been measured in awake, behaving animals. Our new 3D scanning methods, with preserved high spatial and temporal resolution, provide the missing tool for these activity measurements. Among other advantages, we will be able to use these methods to investigate spike-timing-dependent plasticity and the underlying mechanisms, the origin of dendritic regenerative activities, the propagation of dendritic spikes, receptive field structures, dendritic computation between multiple spiny and aspiny dendritic segments, spatiotemporal clustering of different input assemblies, associative learning, multisensory integration, the spatial and temporal structure of the activity of spine, dendritic and somatic assemblies, and the function and interaction of sparsely distributed neuronal populations, such as parvalbumin-, somatostatin-, and vasoactive intestinal polypeptide-expressing neurons. These 3D scanning methods may also provide the key to understanding synchronization processes mediated by neuronal circuitry locally and on a larger scale: these are thought to be important in the integrative functions of the nervous system and in different diseases. Importantly, these complex functional questions can be addressed with our methods at the cellular and sub-cellular level, simultaneously at multiple spiny (or aspiny) dendritic segments, and at the neuronal network level in behaving animals.


Imaging Brain Activity During Motion


Two-dimensional in vivo recording of spine Ca2+ responses has already been realized in anaesthetized animals and even in running animals, but in these papers only a few spines were recorded, with a relatively low signal-to-noise ratio. However, fast 2D and 3D imaging of large spine assemblies and spiny dendritic segments in awake, running, and behaving animals has remained a challenge. Yet this need is made clear by recent work showing that the neuronal firing rate more than doubles in most neurons during locomotion, suggesting a completely altered neuronal function in moving, behaving animals. Moreover, the majority of neuronal computation occurs in distant apical and basal dendritic segments which form complex 3D arbors in the brain. However, none of the previous 2D and 3D imaging methods have been able to provide access to these complex and thin (spiny) dendritic segments during running periods, or in different behavioral experiments, despite the fact that complex behavioral experiments are rapidly spreading in the field of neuroscience. One reason is that, in a typical behavioral experiment, motion-induced transients have similar amplitudes and kinetics to behavior-related Ca2+ transients. Moreover, these transients typically appear at the same time during the tasks, making their separation difficult. Therefore, the 3D scanning methods demonstrated here, alone or in different combinations, will add new tools that have long been missing from the toolkit of neurophotonics for recording dendritic activity in behaving animals.


Compensation of Movement of the Brain


Although closed-loop motion artifact compensation, with three degrees of freedom, has already been developed at low speed (≈10 Hz), the efficiency of the method has not been demonstrated in awake animals, or in dendritic spine measurements, or at higher speeds than those characteristic of motion artefacts. Moreover, due to the complex arachnoidal suspension of the brain, and due to the fact that blood vessels generate spatially inhomogeneous pulsation in their local environment, the brain also exhibits significant deformation, not merely translational movements and, therefore, the amplitude of displacement could be different in each and every sub-region imaged. This is crucial when we measure small-amplitude somatic responses (for example single or a few AP-associated responses) or when we want to measure small structures such as dendritic spines. Fortunately, our 3D imaging and the corresponding analysis methods also allow compensation with variable amplitude and direction in each sub-region imaged, meaning that inhomogeneous displacement distributions can therefore be measured and effectively compensated in 3D.
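A minimal sketch of how a displacement vector can be estimated independently for each imaged sub-region, here by FFT-based cross-correlation of a sub-region against its reference image. Only the integer-pixel 2D case is shown; the subpixel refinement and the 3D extension used in the actual analysis are omitted, and all arrays are placeholders:

    import numpy as np

    def displacement_2d(ref, img):
        # Cross-correlate via FFT and locate the correlation peak.
        cc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
        peak = np.unravel_index(np.argmax(np.abs(cc)), cc.shape)
        shift = np.array(peak, dtype=float)
        dims = np.array(cc.shape, dtype=float)
        # Wrap shifts larger than half the frame to negative values.
        mask = shift > dims / 2
        shift[mask] -= dims[mask]
        return shift

    ref = np.random.rand(64, 64)              # reference sub-region
    img = np.roll(ref, (3, -2), axis=(0, 1))  # same region, displaced
    print(displacement_2d(ref, img))          # approx. [3, -2]

Because the displacement can differ in every sub-region, such an estimate would be computed separately per sub-region and applied to its own ROI coordinates.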


The efficiency of our 3D scanning and motion artifact compensation methods is also validated by the fact that the standard deviation of individual somatic Ca2+ transients was largely reduced (up to 14-fold), and became smaller than the amplitude of a single AP, especially when multi-cube or chessboard scanning was used. This allows single-AP resolution in the moving brain of behaving animals using the currently favored GECI, GCaMP6f. The importance of providing single-AP resolution for neuronal network imaging has also been validated by recent works which demonstrated that, in many systems, neuronal populations code information with single APs instead of bursts.


Simultaneous 3D Imaging of Apical and Basal Dendritic Arbor


Recent data have demonstrated that the apical dendritic tuft of cortical pyramidal neurons is the main target of feedback inputs, where they are amplified by local NMDA spikes to reach the distal dendritic Ca2+ and, finally, the somatic sodium integration points where they meet basal inputs, also amplified by local NMDA spikes. Therefore, the majority of top-down and bottom-up input integration occurs simultaneously at local integrative computational subunits separated by distances of several hundred micrometers, which demands the simultaneous 3D imaging of neuronal processes in a z-range of several hundred micrometers. The maximal, over 1000 μm z scanning range of AO microscopy, which during in vivo measurements with GECIs is limited to about 650 μm by the maximal available power of the currently available lasers, already permitted the simultaneous measurement of apical and basal dendritic segments of layer II/III neurons and dendritic segments of layer V neurons in an over 500 μm range.


Although 2D imaging in anesthetized animals can capture long neuronal processes, the location of horizontally oriented long segments is almost exclusively restricted to a few layers (for example to layer I), and in all other regions we typically see only the cross-section or short segments of obliquely or orthogonally oriented dendrites. Moreover, even in cases when we luckily capture multiple short segments with a single focal plane, it is impossible to move the imaged regions along dendrites and branch points to understand the complexity of spatiotemporal integration. The main advantage of the multi-3D ribbon and snake scanning methods is that any ROI can be flexibly selected, shifted, tilted, and aligned to the ROIs without any constraints; therefore, complex dendritic integration processes can be recorded in a spatially and temporally precise manner.


Deep Scanning


Although several great technologies have been developed for fast 3D recordings, imaging deep-layer neurons is possible only by either causing mechanical injury or using single-point two-photon or three-photon excitation, which allows fluorescence photons scattered from the depth to be collected. Using adaptive optics and regenerative amplifiers can improve resolution and signal-to-noise ratio at depth. Moreover, using GaAsP photomultipliers installed directly in the objective arms can itself extend the in vivo scanning range to over 800 μm. One of the main perspectives of the 3D scanning methods demonstrated here is that the main limitation to reaching the maximal scanning range of over 1.6 mm is the relatively low intensity of the currently available lasers, which cannot compensate for the inherent losses in the four AO deflectors. Supporting this, an over 3 mm z-scanning range has already been demonstrated with 3D AO imaging in transparent samples, where intensity and tissue scattering are not limiting. Therefore, in the future, novel high-power lasers in combination with fast adaptive optics and new red-shifted sensors may allow a much larger 3D scanning range to be utilized, which will, for example, permit the measurement of the entire arbor of deep-layer neurons or 3D hippocampal imaging without removing any parts of the cortex.


Although there are several different arrangements of passive optical elements and the four AO deflectors with which we can realize microscopes for fast 3D scanning, all of these microscopes use drift compensation with counter-propagating AO waves at the second group of deflectors, and therefore the scanning methods demonstrated here can easily be implemented in all 3D AO microscopes. Moreover, at the expense of a reduced scanning volume, 3D AO microscopes could be simplified and used as an upgrade in any two-photon system. Hence we anticipate that our new methods will open new horizons in high-resolution in vivo imaging in behaving animals.


3D Drift AO Scanning


In the following, we briefly describe how to derive a one-to-one relationship between the focal spot coordinates and speed, and the chirp parameters of the four AO deflectors to move the focal spot along any 3D line, starting at any point in the scanning volume.


In order to determine the relationship between the driver frequencies of the four AO deflectors and the x, y and z coordinates of the focal spot, we need the simplified transfer matrix model of the 3D microscope. Our 3D AO system is symmetric in the x and y coordinates, because it is based on two x and two y cylindrical lenses, which are symmetrically arranged in the x-z and y-z planes. We therefore need to calculate the transfer matrix for one plane only, for example for the x-z plane. The first and second x deflectors of our 3D scanner are in conjugated focal planes, as they are coupled by an afocal projection consisting of two achromatic lenses. For simplicity, therefore, we can treat them in juxtaposition during the optical calculations.


As shown in FIG. 9, in our paraxial model we use two lenses with F1 and F2 focal distances at a distance of F1+F2 (afocal projection) to image the two AO deflectors (AOD x1 and AOD x2) to the objective. Fobjective is the focal length of the objective, zx defines the distance of the focal spot from the objective lens along the z-axis, and t1 and t2 are distances between the AO deflector and the first lens of the afocal projection, and between the second lens and the objective, respectively.


The geometrical optical description of the optical system can be performed by the ABCD matrix technique. The angle (α0) and position (x0) of the output laser beam of any optical system can be calculated from the angle (α) and position (x) of the incoming laser beam using the ABCD matrix of the system (Equation 1):










$$\begin{pmatrix} x_0 \\ \alpha_0 \end{pmatrix} = \mathbf{A} \begin{pmatrix} x \\ \alpha \end{pmatrix} \qquad \text{[Equation 1]}$$
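For illustration, ray transfer through such a system can be sketched numerically by composing the elementary ABCD matrices; the focal lengths and distances below are hypothetical placeholders, not parameters of the actual instrument:

    import numpy as np

    def prop(dist):
        # Free-space propagation over a distance dist.
        return np.array([[1.0, dist], [0.0, 1.0]])

    def lens(f):
        # Thin lens of focal length f.
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    # Deflector -> t1 -> lens F1 -> (F1+F2) -> lens F2 -> t2 -> objective -> z
    F1, F2, Fobj = 0.2, 0.15, 0.009       # hypothetical focal lengths (m)
    t1, t2, z = F1, F2, Fobj              # ideal imaging, spot at the focal plane
    M = (prop(z) @ lens(Fobj) @ prop(t2) @ lens(F2)
         @ prop(F1 + F2) @ lens(F1) @ prop(t1))

    x0, a0 = M @ np.array([1e-3, 0.0])    # ray at x = 1 mm, parallel to the axis
    print(M)                              # the A element vanishes at the focal plane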







The deflectors deflecting along the x and y directions are also linked by optical systems that can likewise be modelled paraxially using the ABCD matrix technique. To distinguish these from the optical system between the scanner and the sample, we denote their matrix elements by small letters (a b c d). In this way we can determine, for each ray passing at a coordinate x1 in the first crystal (deflecting along the x axis), the coordinate x2 and angle α2 taken in the second crystal:










$$\begin{pmatrix} x_2 \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x_1 \\ \alpha_1 \end{pmatrix} \qquad \text{[Equation 2]}$$

The link between the second deflector and the sample plane is given by:










$$\begin{pmatrix} x_0 \\ \alpha_0 \end{pmatrix} = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} x_2 \\ \alpha_2' \end{pmatrix} \qquad \text{[Equation 3]}$$

Here α2′ is the angle of the ray leaving the second crystal after deflection. The relation between α2 and α2′ is determined by the deflection rule of the second deflector. The simplest approximation gives:





$$\alpha_2' = \alpha_2 + K \cdot f_2 \qquad \text{[Equation 4]}$$


where K is a proportionality factor between a relative angle deflection (α) following the acousto-optic deflector and the local acoustic frequency (f), according to the following equation:





$$\alpha = K \cdot f$$


If the acousto-optic deflectors are identical, then K is the same for all four deflectors; if different deflectors are used, then each deflector is characterized by its own K proportionality factor, and the equations for any given deflector should be evaluated using the K factor of that deflector. In the following equations identical acousto-optic deflectors are considered, resulting in a uniform K proportionality factor; however, a skilled person can readily use different K proportionality factors if the applied deflectors are different.


Applying the second matrix transform we get:






$$x_0(t) = A x_2 + B\,\alpha_2'(x_2,t) = A x_2 + B\bigl(\alpha_2(x_2,t) + K f_2(x_2,t)\bigr) \qquad \text{[Equation 5]}$$

Applying the first matrix transfer, that between the two deflectors:






$$x_0(t) = A\bigl(a x_1 + b\,\alpha_1(x_1,t)\bigr) + B\bigl(c x_1 + d\,\alpha_1(x_1,t) + K f_2(x_2,t)\bigr) \qquad \text{[Equation 6]}$$

Applying the deflection rule of deflector 1:





$$\alpha_1(x_1,t) = K f_1(x_1,t) \qquad \text{[Equation 7]}$$


we get for the targeted sample coordinates:






$$x_0(t) = A\bigl(a x_1 + bK f_1(x_1,t)\bigr) + B\bigl(c x_1 + dK f_1(x_1,t) + K f_2(x_2,t)\bigr) \qquad \text{[Equation 8]}$$

In the last step we eliminate x2 from the equation:






$$x_0(t) = A\bigl(a x_1 + bK f_1(x_1,t)\bigr) + B\Bigl(c x_1 + dK f_1(x_1,t) + K f_2\bigl(a x_1 + bK f_1(x_1,t),\,t\bigr)\Bigr) \qquad \text{[Equation 9]}$$

The x and t dependence of the frequencies in the two deflectors can be described by the equations:











$$f_1(x_1,t) = f_1(0,0) + a_{x1}(t)\left(t - \frac{D}{2 v_a} - \frac{x_1}{v_a}\right) \qquad \text{[Equation 10]}$$

$$f_2(x_2,t) = f_2(0,0) + a_{x2}(t)\left(t - \frac{D}{2 v_a} - \frac{x_2}{v_a}\right) \qquad \text{[Equation 11]}$$

With these, the x0 coordinate becomes:






$$x_0(t) = (Aa + Bc)\,x_1 + (AbK + BdK)\,f_1(x_1,t) + BK f_2\bigl(a x_1 + bK f_1(x_1,t),\,t\bigr) \qquad \text{[Equation 12]}$$

When substituting the frequencies:











$$\begin{aligned} x_0(t) = {} & (Aa + Bc)\,x_1 + (AbK + BdK)\left(f_1(0,0) + a_{x1}(t)\left(t - \frac{D}{2 v_a} - \frac{x_1}{v_a}\right)\right) \\ & + BK\left(f_2(0,0) + a_{x2}(t)\left(t - \frac{D}{2 v_a} - \frac{a x_1 + bK f_1(x_1,t)}{v_a}\right)\right) \end{aligned} \qquad \text{[Equation 13]}$$

we get the form of the equation that depends only on x1 and t.


Now we can collect the expressions of the coefficients of the x1 containing terms:










The coefficient of the linear x1 term is:

$$Aa + Bc - \frac{A K b\,a_{x1}(t) + K B a\,a_{x2}(t) + K B d\,a_{x1}(t)}{v_a} + \frac{a_{x1}(t)\,a_{x2}(t)\,b\,B\,K^2}{v_a^2} \qquad \text{[Equation 14]}$$

This can be made zero quite simply if the coefficients ax1 and ax2 do not depend on t. In this case we have a simple linear frequency sweep in both deflectors, and a focal spot drifting with constant velocity, when the parameters ax1 and ax2 fulfil the condition given by the equation:











$$Aa + Bc - \frac{A K b\,a_{x1} + K B a\,a_{x2} + K B d\,a_{x1}}{v_a} + \frac{a_{x1}\,a_{x2}\,b\,B\,K^2}{v_a^2} = 0 \qquad \text{[Equation 15]}$$

The x0 coordinate will have the temporal change:

















$$x_0(t) = x_0(0) + v_x\,t \qquad \text{[Equation 16]}$$

with:

$$v_x = A K b\,a_{x1} + K B\,a_{x2} + K B d\,a_{x1} - \frac{B K^2 a_{x1} a_{x2}\,b}{v_a} \qquad \text{[Equation 17]}$$

and:

$$\begin{aligned} x_0(0) = {} & B K f_2(0,0) + A K b f_1(0,0) + B K d f_1(0,0) - \frac{B D K\,a_{x2}}{2 v_a} - \frac{B K^2 a_{x2}\,b\,f_1(0,0)}{v_a} \\ & - \frac{A D K\,a_{x1}\,b}{2 v_a} - \frac{B D K\,a_{x1}\,d}{2 v_a} + \frac{B D K^2 a_{x1} a_{x2}\,b}{2 v_a^2} \end{aligned} \qquad \text{[Equation 18]}$$
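Equations 17 and 18 can be transcribed directly into short numerical helpers; the functions below are only a sketch of that transcription, with no particular units or calibrated values assumed:

    def drift_velocity(A, B, K, a_x1, a_x2, b, d, v_a):
        # Focal-spot velocity v_x of Equation 17 for linear frequency sweeps.
        return (A * K * b * a_x1 + K * B * a_x2 + K * B * a_x1 * d
                - B * K**2 * a_x1 * a_x2 * b / v_a)

    def start_position(A, B, K, D, a_x1, a_x2, b, d, f1_00, f2_00, v_a):
        # Starting coordinate x_0(0) of Equation 18.
        return (B * K * f2_00 + A * K * b * f1_00 + B * K * d * f1_00
                - B * D * K * a_x2 / (2 * v_a)
                - B * K**2 * a_x2 * b * f1_00 / v_a
                - A * D * K * a_x1 * b / (2 * v_a)
                - B * D * K * a_x1 * d / (2 * v_a)
                + B * D * K**2 * a_x1 * a_x2 * b / (2 * v_a**2))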







It is possible to determine the parameters by inverting the above equations, starting from the desired vx and x0(0) values. It is, however, more complicated when one wants to move the spot along curves, which implies not only a constant linear velocity but also acceleration. To achieve this, the ax1 and ax2 parameters must depend on t. The simplest dependence is linear:











$$a_{x1}(t) = c_{x1} + b_{x1}\left(t - \frac{D}{2 v_a} - \frac{x_1}{v_a}\right) \qquad \text{[Equation 19]}$$

and for the second deflector











$$a_{x2}(t) = c_{x2} + b_{x2}\left(t - \frac{D}{2 v_a} - \frac{x_2}{v_a}\right) \qquad \text{[Equation 20]}$$

Again using the relation between x2 and x1:











$$a_{x2}(t) = c_{x2} + b_{x2}\left(t - \frac{D}{2 v_a} - \frac{a x_1 + bK f_1(x_1,t)}{v_a}\right) \qquad \text{[Equation 21]}$$

Substituting this into the equation of x0, we get:











$$\begin{aligned} x_0(t) = {} & (Aa + Bc)\,x_1 + (AbK + BdK)\Bigl(f_1(0,0) + \bigl(c_{x1} + b_{x1}\,\tau_1\bigr)\,\tau_1\Bigr) \\ & + BK\Bigl(f_2(0,0) + \bigl(c_{x2} + b_{x2}\,\tau_2\bigr)\,\tau_2\Bigr) \end{aligned} \qquad \text{[Equation 22]}$$

where

$$\tau_1 = t - \frac{D}{2 v_a} - \frac{x_1}{v_a}, \qquad \tau_2 = t - \frac{D}{2 v_a} - \frac{a x_1 + bK\Bigl(f_1(0,0) + \bigl(c_{x1} + b_{x1}\,\tau_1\bigr)\,\tau_1\Bigr)}{v_a}$$

Here x0 depends only on x1 and t. To obtain a compact focal spot, all x1-dependent terms have to vanish. There are four such terms, with linear, quadratic, cubic and fourth-power dependence, and in the general case all of them depend on t. We have to select special cases to find solutions that can be described analytically, since the general case is too complicated.


The general Equation 22 can be applied to different optical setups by using the matrix elements applicable to the particular setup.


In an exemplary embodiment all deflectors are optically linked by telescopes composed of lenses of different focal lengths.


The general matrix for a telescope linking two deflectors 1 and 2, composed of two lenses (lens 1 and lens 2) with focal lengths f1 and f2 placed at distance t from each other, lens 1 placed at distance d1 from deflector 1 and lens 2 placed at distance d2 from deflector 2, is:















$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} 1 - \dfrac{d_2}{f_2} - \dfrac{d_2 + t\left(1 - \frac{d_2}{f_2}\right)}{f_1} & \;\; d_1 + t + d_2 - \dfrac{d_2 t}{f_2} - \dfrac{d_2 d_1}{f_2} - \dfrac{d_2 d_1 + t d_1 - \frac{d_2 d_1 t}{f_2}}{f_1} \\[2ex] -\dfrac{f_1 + f_2 - t}{f_1 f_2} & \;\; 1 - \dfrac{d_1}{f_1} - \dfrac{d_1 + t\left(1 - \frac{d_1}{f_1}\right)}{f_2} \end{pmatrix} \qquad \text{[Equation 23]}$$

In the ideal case of a telescope, where the lenses are placed at a distance f1+f2 from each other for optical imaging, the matrix reduces to:










$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} -\dfrac{f_2}{f_1} & \;\; f_2 - \dfrac{d_1 f_2}{f_1} - \dfrac{(d_2 - f_2)\,f_1}{f_2} \\[2ex] 0 & -\dfrac{f_1}{f_2} \end{pmatrix} \qquad \text{[Equation 24]}$$

In the system of the mentioned reference, the deflectors are all placed at conjugate image planes of the intermediate telescopes. The most efficient imaging with a telescope is performed between the first focal plane of the first lens (meaning f1=d1) and the second focal plane of the second lens (f2=d2).


In this case the matrix reduces to:










$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} -\dfrac{f_2}{f_1} & 0 \\[1ex] 0 & -\dfrac{f_1}{f_2} \end{pmatrix} \qquad \text{[Equation 25]}$$

If the two focal lengths are equal we get the simplest relation:










$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \qquad \text{[Equation 26]}$$

Between each pair of deflectors of the analyzed system, any of the matrices from Equations 23-26 can be applied to obtain the appropriate matrix elements for Equation 22. If the deflectors deflecting along the x and y axes are positioned alternately, e.g. one x deflector is followed by one y deflector, the telescopes linking the two x-direction (x1 and x2) and y-direction (y1 and y2) deflectors are described by the multiplication of the matrices of the corresponding telescopes. Here we neglect the propagation through the deflectors (of negligible length compared to the distances d1, f1, etc.) and consider that the y deflectors do not modify the propagation angles in the x-z plane and that, vice versa, the x-deflecting deflectors have no influence in the y-z plane. Hence, using e.g. Equation 24, we get for the telescopes formed by lenses of focal lengths f1 and f2 linking the x1 and y1 deflectors and lenses of focal lengths f3 and f4 linking the y1 and x2 deflectors:










$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \dfrac{f_2}{f_1}\cdot\dfrac{f_4}{f_3} & \;\; \dfrac{d_1 f_2^2 f_3^2 + d_2 f_1^2 f_3^2 + d_3 f_2^2 f_4^2 + d_4 f_2^2 f_3^2 - f_1 f_2^2 f_3^2 - f_2 f_1^2 f_3^2 - f_3 f_2^2 f_4^2 - f_4 f_2^2 f_3^2}{f_1 f_2 f_3 f_4} \\[2ex] 0 & \dfrac{f_1}{f_2}\cdot\dfrac{f_3}{f_4} \end{pmatrix} \qquad \text{[Equation 27]}$$

If the focal lengths f1=f2 and f3=f4, we get the simplest matrix:










$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad \text{[Equation 28]}$$
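A short numerical check of the telescope matrices (Equations 23-26) under the conjugate-plane conditions; the focal length used is an arbitrary placeholder:

    import numpy as np

    def telescope(f1, f2, d1, d2, t):
        # General telescope of Equation 23, built from elementary blocks.
        P = lambda x: np.array([[1.0, x], [0.0, 1.0]])          # propagation
        L = lambda f: np.array([[1.0, 0.0], [-1.0 / f, 1.0]])   # thin lens
        return P(d2) @ L(f2) @ P(t) @ L(f1) @ P(d1)

    f = 0.1
    # d1=f1, d2=f2, t=f1+f2: conjugate-plane imaging, Equation 26
    print(telescope(f, f, f, f, 2 * f))      # approx. [[-1, 0], [0, -1]]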







The optical transfer between the last deflector and the targeted sample plane is different for the deflectors deflecting along x and along y. The optical system linking the last x deflector to the sample plane also contains the telescope between the x2 and y2 deflectors, made of the lenses with focal lengths f3′ and f4′, the distance between deflector x2 and lens f3′ being d3′, that between lenses f3′ and f4′ being f3′+f4′, and that between f4′ and deflector y2 being d4′. The optical system between deflector y2 and the targeted sample plane consists of three lenses with focal lengths F1, F2 and Fobj, the distances between the elements being, respectively, t1, F1+F2, t2 and zx=zy, starting from deflector y2. Hence the complete transfer between x2 and the sample plane is described by:










$$\begin{pmatrix} A_x & B_x \\ C_x & D_x \end{pmatrix} = \begin{pmatrix} 1 & z_x \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{F_{obj}} & 1 \end{pmatrix} \begin{pmatrix} 1 & t_2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{F_2} & 1 \end{pmatrix} \begin{pmatrix} 1 & F_1 + F_2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{F_1} & 1 \end{pmatrix} \begin{pmatrix} 1 & t_1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & d_4' \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_4'} & 1 \end{pmatrix} \begin{pmatrix} 1 & f_3' + f_4' \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_3'} & 1 \end{pmatrix} \begin{pmatrix} 1 & d_3' \\ 0 & 1 \end{pmatrix} \qquad \text{[Equation 29]}$$

and that between y2 and sample plane is:










$$\begin{pmatrix} A_y & B_y \\ C_y & D_y \end{pmatrix} = \begin{pmatrix} 1 & z_y \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{F_{obj}} & 1 \end{pmatrix} \begin{pmatrix} 1 & t_2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{F_2} & 1 \end{pmatrix} \begin{pmatrix} 1 & F_1 + F_2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{F_1} & 1 \end{pmatrix} \begin{pmatrix} 1 & t_1 \\ 0 & 1 \end{pmatrix} \qquad \text{[Equation 30]}$$

The latter can be written in closed form:










$$\begin{pmatrix} A_y & B_y \\ C_y & D_y \end{pmatrix} = \begin{pmatrix} -\dfrac{F_2 (F_{obj} - z_y)}{F_1 F_{obj}} & \;\; \left(F_1 + F_2 - \dfrac{F_2 t_1}{F_1} - \dfrac{F_1 t_2}{F_2}\right)\left(1 - \dfrac{z_y}{F_{obj}}\right) - \dfrac{F_1 z_y}{F_2} \\[2ex] \dfrac{F_2}{F_1 F_{obj}} & \;\; \dfrac{F_2 t_1}{F_1 F_{obj}} - \dfrac{F_1 (F_2 + F_{obj} - t_2)}{F_2 F_{obj}} - \dfrac{F_2}{F_{obj}} \end{pmatrix} \qquad \text{[Equation 31]}$$

The values a, b, c, d and A, B, C, D of these matrices can be used in equations like Equation 22 to determine the temporal variation of the x0 and y0 coordinates of the focal spot.


In another embodiment, the deflectors are placed in the order x1-x2-y1-y2, without intermediate telescopes or lenses. The distances between the deflectors are d1, d2 and d3, respectively, starting from deflector x1. Here the thicknesses of the deflectors cannot be neglected relative to the distances between them; their optical thicknesses (refractive index times physical thickness) are denoted by tx1, tx2, ty1 and ty2, respectively. The optical transfer matrix linking the deflectors x1 and x2 is:














$$\begin{pmatrix} a_x & b_x \\ c_x & d_x \end{pmatrix} = \begin{pmatrix} 1 & \frac{t_{x2}}{2} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & d_1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & \frac{t_{x1}}{2} \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & \; d_1 + \frac{t_{x1}}{2} + \frac{t_{x2}}{2} \\ 0 & 1 \end{pmatrix} \qquad \text{[Equation 32]}$$

and that between y1 and y2:














$$\begin{pmatrix} a_y & b_y \\ c_y & d_y \end{pmatrix} = \begin{pmatrix} 1 & \frac{t_{y2}}{2} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & d_3 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & \frac{t_{y1}}{2} \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & \; d_3 + \frac{t_{y1}}{2} + \frac{t_{y2}}{2} \\ 0 & 1 \end{pmatrix} \qquad \text{[Equation 33]}$$

The optical system between the deflector y2 and the sample plane is the same as in the previously analyzed microscope, formed by three lenses of focal lengths F1, F2 and Fobj, placed at the same distances as before.


Therefore the ABCD matrix in the y-z plane is the product of the matrix given in Equation 31 and the propagation through half of deflector y2:










$$\begin{pmatrix} A_y & B_y \\ C_y & D_y \end{pmatrix} = \begin{pmatrix} -\dfrac{F_2 (F_{obj} - z_y)}{F_1 F_{obj}} & \;\; \left(F_1 + F_2 - \dfrac{F_2 t_1}{F_1} - \dfrac{F_1 t_2}{F_2}\right)\left(1 - \dfrac{z_y}{F_{obj}}\right) - \dfrac{F_1 z_y}{F_2} \\[2ex] \dfrac{F_2}{F_1 F_{obj}} & \;\; \dfrac{F_2 t_1}{F_1 F_{obj}} - \dfrac{F_1 (F_2 + F_{obj} - t_2)}{F_2 F_{obj}} - \dfrac{F_2}{F_{obj}} \end{pmatrix} \begin{pmatrix} 1 & \frac{t_{y2}}{2} \\ 0 & 1 \end{pmatrix} \qquad \text{[Equation 34]}$$

but the multiplicative matrix can usually be neglected, since ty2 is usually much smaller than F1, F2, etc.


The ABCD matrix in the x-z plane must take into account the propagation through the deflectors y1 and y2 and the distances between them.










$$\begin{pmatrix} A_x & B_x \\ C_x & D_x \end{pmatrix} = \begin{pmatrix} 1 & z_x \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{F_{obj}} & 1 \end{pmatrix} \begin{pmatrix} 1 & t_2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{F_2} & 1 \end{pmatrix} \begin{pmatrix} 1 & F_1 + F_2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{F_1} & 1 \end{pmatrix} \begin{pmatrix} 1 & t_1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & t_{y2} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & d_3 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & t_{y1} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & d_2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & \frac{t_{x2}}{2} \\ 0 & 1 \end{pmatrix} \qquad \text{[Equation 35]}$$

These matrix elements will be asymmetric in the x-z and y-z planes, hence the parameters determining the x0 and y0 coordinates of the focal spot must be computed separately.


We realized a system (Katona et al.) that contains fewer elements than the microscope of Reddy et al., but uses a telescope between the deflector pairs x1, y1 and x2, y2 to avoid the asymmetry appearing in the system of Tomas et al., expressed by Equations 34 and 35. The telescope between the two deflector pairs is formed by two lenses of equal focal length, placed at twice the focal length from each other. This telescope performs perfect imaging between deflectors x1 and x2 and between deflectors y1 and y2, respectively.


The thicknesses of the deflectors can be neglected compared to the focal lengths of the intermediate telescope lens and compared to the focal lengths F1, F2 and distances t1, t2.


With these approximations, assuming ideal imaging we get for the (a b c d) matrix for both deflector pairs:










$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \qquad \text{[Equation 36]}$$

The ABCD transfer matrix of the system part shown in FIG. 9 that transfers the rays from the output plane of the second deflector to the focal plane of the objective can be calculated according to Equation 31, since the optical system is the same:










$$\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} 1 & z_x \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{F_{obj}} & 1 \end{pmatrix} \begin{pmatrix} 1 & t_2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{F_2} & 1 \end{pmatrix} \begin{pmatrix} 1 & F_1 + F_2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{F_1} & 1 \end{pmatrix} \begin{pmatrix} 1 & t_1 \\ 0 & 1 \end{pmatrix} \qquad \text{[Equation 37]}$$

The product of the matrices is quite complicated in its general form; it is the same as in Equation 31, but now identical for both the x and y coordinates, with zx=zy:










$$\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} -\dfrac{F_2 (F_{obj} - z_x)}{F_1 F_{obj}} & \;\; \left(F_1 + F_2 - \dfrac{F_2 t_1}{F_1} - \dfrac{F_1 t_2}{F_2}\right)\left(1 - \dfrac{z_x}{F_{obj}}\right) - \dfrac{F_1 z_x}{F_2} \\[2ex] \dfrac{F_2}{F_1 F_{obj}} & \;\; \dfrac{F_2 t_1}{F_1 F_{obj}} - \dfrac{F_1 (F_2 + F_{obj} - t_2)}{F_2 F_{obj}} - \dfrac{F_2}{F_{obj}} \end{pmatrix} \qquad \text{[Equation 38]}$$

However, we can use the simplification below, considering that the afocal optical system images the deflector output plane onto the aperture of the objective lens with ideal telescope imaging. In this case, t1=F1 and t2=F2. With this simplification we get:










$$\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} -\dfrac{F_2 (F_{obj} - z_x)}{F_1 F_{obj}} & \; -\dfrac{F_1 z_x}{F_2} \\[2ex] \dfrac{F_2}{F_1 F_{obj}} & \; -\dfrac{F_1}{F_2} \end{pmatrix} \qquad \text{[Equation 39]}$$

Using the matrices of Equations 36 and 39, we can calculate the angle (α0) and coordinate (x0) of any output ray in the x-z plane at a given z distance (zx) from the objective from the angle (α) and position (x) taken in the plane of the last AO deflector. The same calculation can be used for the y-z plane. The x0 coordinate is given in general form by Equation 22, where we now insert the (a b c d) matrix elements from Equation 36 and replace x1 by x, representing the x coordinate in the first deflector:











$$\begin{aligned} x_0(t) = {} & (-A)\,x - BK\left(f_1(0,0) + \Bigl(c_{x1} + b_{x1}\Bigl(t - \frac{D}{2 v_a} - \frac{x}{v_a}\Bigr)\Bigr)\Bigl(t - \frac{D}{2 v_a} - \frac{x}{v_a}\Bigr)\right) \\ & + BK\left(f_2(0,0) + \Bigl(c_{x2} + b_{x2}\Bigl(t - \frac{D}{2 v_a} + \frac{x}{v_a}\Bigr)\Bigr)\Bigl(t - \frac{D}{2 v_a} + \frac{x}{v_a}\Bigr)\right) \end{aligned} \qquad \text{[Equation 40]}$$

We also replace the matrix elements A and B from Equation 39:











$$\begin{aligned} x_0(t) = {} & \frac{F_2 (F_{obj} - z_x)}{F_1 F_{obj}}\,x + \frac{F_1 z_x}{F_2}\,K\left(f_1(0,0) + \bigl(c_{x1} + b_{x1}\,\tau_-\bigr)\,\tau_-\right) \\ & - \frac{F_1 z_x}{F_2}\,K\left(f_2(0,0) + \bigl(c_{x2} + b_{x2}\,\tau_+\bigr)\,\tau_+\right), \qquad \tau_\mp = t - \frac{D}{2 v_a} \mp \frac{x}{v_a} \end{aligned} \qquad \text{[Equation 41]}$$

After some transformations and simplifications we get:











$$x_0(t) = -\frac{F_2 (F_{obj} - z_x)}{F_1 F_{obj}}\,x - \frac{F_1 z_x}{F_2}\,K\left( f_1(0,0) - f_2(0,0) + b_{x1}\,\tau_-^2 - b_{x2}\,\tau_+^2 + c_{x1}\,\tau_- - c_{x2}\,\tau_+ \right) \qquad \text{[Equation 42]}$$

Expanding the terms in brackets, we get separate x- and t-dependent parts:











$$\begin{aligned} x_0(t) = {} & -\frac{F_1 z_x}{F_2}\,K\,(b_{x1} - b_{x2})\left(t - \frac{D}{2 v_a}\right)^{2} - \frac{F_1 z_x}{F_2}\,K\,(b_{x1} - b_{x2})\left(\frac{x}{v_a}\right)^{2} \\ & + \left( -\frac{F_2 (F_{obj} - z_x)}{F_1 F_{obj}} - \frac{F_1 z_x}{F_2}\,K\left[ -\frac{2\,(b_{x1} + b_{x2})}{v_a}\left(t - \frac{D}{2 v_a}\right) - \frac{c_{x1} + c_{x2}}{v_a} \right] \right) x \\ & + \frac{F_1 z_x}{F_2}\,K\left( f_1(0,0) - f_2(0,0) + (c_{x1} - c_{x2})\left(t - \frac{D}{2 v_a}\right) \right) \end{aligned} \qquad \text{[Equation 43]}$$

To provide ideal focusing, in a first assumption, the time-dependent and time-independent terms in the x-dependent part of the x0 coordinate should vanish separately for all t values. To have the beam focused, the terms containing x² and x must vanish for any x value. This implies two equations instead of only one:












$$-\frac{F_2 (F_{obj} - z_x)}{F_1 F_{obj}} + K\,\frac{F_1 z_x}{F_2}\cdot\frac{c_{x1} + c_{x2}}{v_a} + 2K\,\frac{F_1 z_x}{F_2}\cdot\frac{b_{x1} + b_{x2}}{v_a}\left(t - \frac{D}{2 v_a}\right) = 0 \qquad \text{[Equation 44]}$$

and:

$$K\,\frac{F_1 z_x}{F_2}\cdot\frac{b_{x1} - b_{x2}}{v_a^2} = 0 \qquad \text{[Equation 45]}$$

The second equation implies that bx1=bx2=bx. This also implies that the first term on the right side of Equation 43, the only one containing a term that depends on t², vanishes. Hence we have an x0 coordinate moving with constant velocity. If this happens at a constant z, which is not time dependent, and bx1=bx2=0, we get back to the simple linear temporal slope of the acoustic frequencies.


From Equation 44 we can express the time-dependence of the z coordinate:











$$z_x(t) = \frac{F_2}{F_1}\Bigg/\left( \frac{F_2}{F_1 F_{obj}} + K\,\frac{F_1 (c_{x1} + c_{x2})}{F_2 v_a} + 4K\,\frac{F_1 b_x}{F_2 v_a}\left(t - \frac{D}{2 v_a}\right) \right) \qquad \text{[Equation 46]}$$

We will treat separately the cases when the zx coordinate is constant, hence the focal spot drifts within the horizontal x-y plane (see below example I); and when the spot moves along arbitrary 3D lines possibly following the axes of the structures that are measured—e.g. axons, dendrites, etc. (example II).


Example I: The zx Coordinate does not Depend on Time

In this case, bx1=bx2=0, as we can see from Equations 45 and 46 above.


From Equation 46, we also see that the focal plane is constant:










$$z_x = \frac{F_2}{F_1}\Bigg/\left( \frac{F_2}{F_1 F_{obj}} + K\,\frac{F_1 (c_{x1} + c_{x2})}{F_2 v_a} \right) \qquad \text{[Equation 47]}$$

If we set a desired zx plane, we get the following relationship between the required cx1 and cx2 parameters:











$$c_{x1} + c_{x2} = \frac{v_a F_2^2}{K\,z_x F_{obj} F_1^2}\,(F_{obj} - z_x) \qquad \text{[Equation 48]}$$

The temporal variation of the x0 coordinate in this case is given by:











$$x_0(t) = -\frac{F_1 z_x}{F_2}\,K\left( f_1(0,0) - f_2(0,0) + (c_{x1} - c_{x2})\left(t - \frac{D}{2 v_a}\right) \right) \qquad \text{[Equation 49]}$$

If we replace zx with its expression from Equation 47, we get for the x0 coordinate:











$$x_0(t) = -\frac{F_1}{F_2}\cdot\frac{\dfrac{F_2}{F_1}}{\dfrac{F_2}{F_1 F_{obj}} + K\,\dfrac{F_1 (c_{x1} + c_{x2})}{F_2 v_a}}\cdot K\left( f_1(0,0) - f_2(0,0) + (c_{x1} - c_{x2})\left(t - \frac{D}{2 v_a}\right) \right) \qquad \text{[Equation 50]}$$

after simplification to:











$$x_0(t) = \frac{-K}{\dfrac{F_2}{F_1 F_{obj}} + K\,\dfrac{F_1 (c_{x1} + c_{x2})}{F_2 v_a}}\left( f_1(0,0) - f_2(0,0) + (c_{x1} - c_{x2})\left(t - \frac{D}{2 v_a}\right) \right) \qquad \text{[Equation 51]}$$

We express the initial velocity and acceleration of the focal spot along the x0 coordinate:










$$v_{x0} = \frac{-K}{\dfrac{F_2}{F_1 F_{obj}} + K\,\dfrac{F_1 (c_{x1} + c_{x2})}{F_2 v_a}}\,(c_{x1} - c_{x2}) \qquad \text{[Equation 52]}$$

further simplified:










$$v_{x0} = -\frac{K\,z_x F_1}{F_2}\,(c_{x1} - c_{x2}) \qquad \text{[Equation 53]}$$

and:

$$a_{x0} = \frac{-4K}{\dfrac{F_2}{F_1 F_{obj}} + K\,\dfrac{F_1 (c_{x1} + c_{x2})}{F_2 v_a}}\;b_{x1} = 0 \qquad \text{[Equation 54]}$$

The last equation shows that in the x-z plane the focal spot cannot be accelerated; it drifts with a constant velocity vx0, which is the same for the duration of the frequency chirps. When we want to calculate the values of the required frequency slopes for a moving focal point characterized by the parameters starting x coordinate x0, distance zx from the objective, and velocity vx0 along the x axis, we need to use the expressions for cx1+cx2 (Equation 48) and cx1−cx2 (Equation 53).


For cx1 and cx2 we get:











c

x





1


-

c

x





2



=



-

F
2



K
*

F
1

*

z
x





(

v

x





0


)






[

Equation





55

]








c

x





1


+

c

x





2



=


v
a




F
2
2


K
*

z
x

*

F
obj

*

F
1
2





(


F
obj

-

z
x


)






[

Equation





56

]







Adding and subtracting the above two equations, we get the results:










$$c_{x1} = \frac{-F_2}{2 K F_1 z_x}\left( v_{x0} - \frac{v_a F_2 (F_{obj} - z_x)}{F_1 F_{obj}} \right) \qquad \text{[Equation 57]}$$

$$c_{x2} = \frac{F_2}{2 K F_1 z_x}\left( v_{x0} + \frac{v_a F_2 (F_{obj} - z_x)}{F_1 F_{obj}} \right) \qquad \text{[Equation 58]}$$
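Equations 57 and 58 can be turned into a small helper that returns the chirp parameters for a requested focal plane and drift velocity. This is only a sketch of that inversion; every numerical value below is an arbitrary placeholder rather than a calibrated parameter of the instrument:

    def chirp_slopes(z_x, v_x0, F1, F2, Fobj, K, v_a):
        # c_x1 and c_x2 of Equations 57-58 for a desired z_x and v_x0.
        common = v_a * F2 * (Fobj - z_x) / (F1 * Fobj)
        c_x1 = -F2 / (2 * K * F1 * z_x) * (v_x0 - common)
        c_x2 = F2 / (2 * K * F1 * z_x) * (v_x0 + common)
        return c_x1, c_x2

    c1, c2 = chirp_slopes(z_x=0.008, v_x0=1e-3, F1=0.2, F2=0.15,
                          Fobj=0.009, K=2e-6, v_a=650.0)
    print(c1, c2)   # their sum and difference reproduce Equations 55-56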







In summary, we can say that it is possible to drift the focal spot at a constant velocity along lines lying in horizontal planes (perpendicular to the objective axis); the focal distance zx can be set by the acoustic frequency chirps in the AO deflectors. The available ranges of zx and vx0 cannot be deduced from this analysis; they are limited by the frequency bandwidths of the AO devices, which limit the temporal length of the chirp sequences of a given slope.


Example II: The zx Coordinate Depends on Time

If we want to drift the spot in the sample space along the z axis within one AO switching time period, we have to allow for temporal change of the zx coordinate. The formula:











$$z_x(t) = \frac{F_2}{F_1}\Bigg/\left( \frac{F_2}{F_1 F_{obj}} + K\,\frac{F_1 (c_{x1} + c_{x2})}{F_2 v_a} + 2K\,\frac{F_1 (b_{x1} + b_{x2})}{F_2 v_a}\left(t - \frac{D}{2 v_a}\right) \right) \qquad \text{[Equation 59]}$$

comes from the constraint to focus all rays emerging from the AO cells onto a single focal spot after the objective (see Equation 47 for the time-independent zx).


From Equation 59 we get:












$$-\frac{F_2 (F_{obj} - z_x)}{F_1 F_{obj}} + K\,\frac{F_1 z_x}{F_2}\cdot\frac{c_{x1} + c_{x2}}{v_a} + 2K\,\frac{F_1 z_x}{F_2}\cdot\frac{b_{x1} + b_{x2}}{v_a}\left(t - \frac{D}{2 v_a}\right) = 0 \qquad \text{[Equation 60]}$$

hence:











$$z_x(t) = \frac{F_2}{F_1}\Bigg/\left( \frac{F_2}{F_1 F_{obj}} + K\,\frac{F_1 (c_{x1} + c_{x2})}{F_2 v_a} + 2K\,\frac{F_1 (b_{x1} + b_{x2})}{F_2 v_a}\left(t - \frac{D}{2 v_a}\right) \right) \qquad \text{[Equation 61]}$$

This equation has, however, a non-linear temporal dependence. Therefore, we need its Taylor series to simplify further calculations:











$$z_x(t) = \frac{F_2}{F_1}\left[ \frac{1}{G} - \frac{2K\,\dfrac{F_1}{F_2}\cdot\dfrac{b_{x1} + b_{x2}}{v_a}}{G^{2}}\,t + \frac{\left(2K\,\dfrac{F_1}{F_2}\cdot\dfrac{b_{x1} + b_{x2}}{v_a}\right)^{2}}{G^{3}}\,t^{2} + \dots \right] \qquad \text{[Equation 62]}$$

where

$$G = \frac{F_2}{F_1 F_{obj}} + K\,\frac{F_1}{F_2}\cdot\frac{c_{x1} + c_{x2}}{v_a} - 2K\,\frac{F_1}{F_2}\cdot\frac{b_{x1} + b_{x2}}{v_a}\cdot\frac{D}{2 v_a}$$
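A quick numerical sanity check of this expansion against the exact expression of Equation 61; all parameter values below are arbitrary placeholders chosen only to exercise the formulas:

    F1, F2, Fobj = 0.2, 0.15, 0.009          # hypothetical values
    K, v_a, D = 2e-6, 650.0, 0.015
    cx1, cx2, bx1, bx2 = 4e9, 3e9, 1e12, 1e12

    G = (F2 / (F1 * Fobj) + K * F1 / F2 * (cx1 + cx2) / v_a
         - 2 * K * F1 / F2 * (bx1 + bx2) / v_a * D / (2 * v_a))
    dG = 2 * K * F1 / F2 * (bx1 + bx2) / v_a  # slope of the denominator

    t = 5e-6
    z_exact = (F2 / F1) / (G + dG * t)                           # Equation 61
    z_taylor = (F2 / F1) * (1 / G - dG / G**2 * t + dG**2 / G**3 * t**2)
    print(z_exact, z_taylor)                 # nearly equal for small t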












To have a nearly constant velocity, the second and higher order terms in the Taylor series should be small, or nearly vanish: this imposes constraints on the bx1, bx2, cx1, and cx2 values. Our simplest presumption is that the linear part will dominate time dependence over the quadratic part, which means that the ratio of their coefficients should be small:












$$\frac{K\,\dfrac{b_{x1} + b_{x2}}{v_a}\cdot\dfrac{F_1}{F_2}}{\dfrac{F_2}{F_1 F_{obj}} + K\,\dfrac{F_1}{F_2}\cdot\dfrac{c_{x1} + c_{x2}}{v_a} - 2K\,\dfrac{F_1}{F_2}\cdot\dfrac{b_{x1} + b_{x2}}{v_a}\cdot\dfrac{D}{2 v_a}} \ll 1 \qquad \text{[Equation 63]}$$

However, the second member in the sum, the velocity along the z axis in the z-x plane (vzx), is also similarly expressed:










$$v_{zx} = \frac{-2K\,\dfrac{b_{x1} + b_{x2}}{v_a}}{\left( \dfrac{F_2}{F_1 F_{obj}} + K\,\dfrac{F_1}{F_2}\cdot\dfrac{c_{x1} + c_{x2}}{v_a} - 2K\,\dfrac{F_1}{F_2}\cdot\dfrac{b_{x1} + b_{x2}}{v_a}\cdot\dfrac{D}{2 v_a} \right)^{2}} \qquad \text{[Equation 64]}$$

From Equation 45 we have bx1=bx2=bx, and in this case bx is not zero. We need further constraints to express bx and the remaining constants.


The formula for the x0 coordinate (from Equation 43) is:

$$x_0(t) = -\frac{F_2\bigl(F_{obj} - z_x(t)\bigr)}{F_1 F_{obj}}\,x - \frac{F_1 z_x(t)}{F_2}\,K\left( f_{x1}(0,0) - f_{x2}(0,0) + b_{x1}\,\tau_-^2 - b_{x2}\,\tau_+^2 + c_{x1}\,\tau_- - c_{x2}\,\tau_+ \right), \qquad \tau_\mp = t - \frac{D}{2 v_a} \mp \frac{x}{v_a}$$

Substituting zx(t) from Equation 61, the x-dependent terms vanish for the focused spot (by the focusing conditions above), and the remaining time dependence is:

$$x_0(t) = \frac{-K}{\dfrac{F_2}{F_1 F_{obj}} + K\,\dfrac{F_1}{F_2}\cdot\dfrac{c_{x1} + c_{x2}}{v_a} + 2K\,\dfrac{F_1}{F_2}\cdot\dfrac{b_{x1} + b_{x2}}{v_a}\left(t - \dfrac{D}{2 v_a}\right)}\left( f_{x1}(0,0) - f_{x2}(0,0) + (c_{x1} - c_{x2})\left(t - \frac{D}{2 v_a}\right) \right) \qquad \text{[Equation 65]}$$

To find the drift velocity along the x axis we should differentiate the above function with respect to t:











$$v_x(t) = \frac{d x_0(t)}{dt} = \frac{2K\,\dfrac{F_1}{F_2}\cdot\dfrac{b_{x1} + b_{x2}}{v_a}}{W(t)^{2}}\;K\left( f_{x1}(0,0) - f_{x2}(0,0) + (c_{x1} - c_{x2})\left(t - \frac{D}{2 v_a}\right) \right) - \frac{K\,(c_{x1} - c_{x2})}{W(t)} \qquad \text{[Equation 66]}$$

where W(t) denotes the denominator of Equation 65:

$$W(t) = \frac{F_2}{F_1 F_{obj}} + K\,\frac{F_1}{F_2}\cdot\frac{c_{x1} + c_{x2}}{v_a} + 2K\,\frac{F_1}{F_2}\cdot\frac{b_{x1} + b_{x2}}{v_a}\left(t - \frac{D}{2 v_a}\right)$$

Taken at t=0, we can determine the initial value vx0 of the drift velocity component along the x axis:

$$v_{x0} = \frac{2K\,\dfrac{F_1}{F_2}\cdot\dfrac{b_{x1} + b_{x2}}{v_a}}{W(0)^{2}}\;K\left( f_{1x}(0,0) - f_{2x}(0,0) - \frac{D}{2 v_a}\,(c_{x1} - c_{x2}) \right) - \frac{K\,(c_{x1} - c_{x2})}{W(0)} \qquad \text{[Equation 67]}$$

with

$$W(0) = \frac{F_2}{F_1 F_{obj}} + K\,\frac{F_1}{F_2}\cdot\frac{c_{x1} + c_{x2}}{v_a} - 2K\,\frac{F_1}{F_2}\cdot\frac{b_{x1} + b_{x2}}{v_a}\cdot\frac{D}{2 v_a}$$

If we take bx from the expression of vzx (Equation 64), and introduce it into Equation 67 we will have an equation (Equation 68) that gives a constraint for the choice of cx1 and cx2. This constraint relates cx1 and cx2 to vx0 and vzx:











$$\begin{aligned} & K\left(v_x - r\,\Delta f_{0x}\,v_{zx}\right)\left(1 - \frac{v_{zx}\,D}{v_a F_{obj}}\right) q \\ & \quad = \frac{F_2}{F_1 F_{obj}}\left(v_x - r\,\Delta f_{0x}\,v_{zx}\right)^{2} + \frac{v_{zx}\,D\,r^{2}}{v_a^{2}}\left(v_x - r\,\Delta f_{0x}\,v_{zx}\right) p\,q + \frac{r}{v_a}\left(v_x - r\,\Delta f_{0x}\,v_{zx}\right)^{2} p \\ & \qquad + \frac{v_{zx}^{2}\,D^{2}\,r^{3}}{4\,v_a^{3}}\,p\,q^{2} + \frac{v_{zx}^{2}\,D^{2}\,r\,K}{4\,v_a^{2}\,F_{obj}}\,q^{2} \end{aligned} \qquad \text{[Equation 68]}$$

Here we introduced the following notations:









r := K*(F_1/F_2)  [Equation 69]

Δf_0x := f_x1(0,0) - f_x2(0,0)  [Equation 70]

q := c_x1 - c_x2  [Equation 71]

p := c_x1 + c_x2  [Equation 72]







We can express p from Equation 68, resulting in a relationship between p and q:









p = [(1 - (D*v_zx)/(v_a*F_obj))*K*H*q - (F_2/(F_1*F_obj))*H^2 - ((v_zx^2*D^2*r*K)/(4*v_a^2*F_obj))*q^2] / [(r/v_a)*((v_zx*D*r*q)/(2*v_a) + H)^2]  [Equation 73]







where we introduced the notation:






H := v_x - r*Δf_0x*v_zx  [Equation 74]


These are general equations that apply to all possible trajectories. Practically, we can analyze the motion of the spot along different trajectories separately.
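Equation 73, as reconstructed above, transcribes directly into a short numerical helper. This is a sketch only: the function and argument names are ours, and consistent units are assumed (e.g. lengths in μm, time in μs, frequencies in MHz):

    def p_from_q(q, v_x, v_zx, df_0x, K, v_a, D, F_1, F_2, F_obj):
        """Constraint linking p = c_x1 + c_x2 to q = c_x1 - c_x2 (Equation 73)."""
        r = K * F_1 / F_2                    # Equation 69
        H = v_x - r * df_0x * v_zx           # Equation 74
        num = ((1 - D * v_zx / (v_a * F_obj)) * K * H * q
               - F_2 / (F_1 * F_obj) * H**2
               - v_zx**2 * D**2 * r * K / (4 * v_a**2 * F_obj) * q**2)
        den = r / v_a * (v_zx * D * r * q / (2 * v_a) + H)**2
        return num / den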


Motion in Space Along 3D Lines


A practically important possibility would be to set a linear trajectory for the drifting spot, following e.g. the axis of a measured dendrite or axon. This is a general 3D line, with arbitrary angles relative to the axes. The projections of this 3D line onto the x-z and y-z planes are also lines that can be treated separately. We are dealing now with the projection on the x-z plane. The projection on the y-z plane can be handled similarly; the two projections are, however, not completely independent, as will be shown later. If the spot is accelerated on the trajectory, the acceleration and initial velocity are also projected on the x-z and y-z planes. We name the two orthogonal components of the initial velocity in the x-z plane vx0 and vzx0, which are parallel to the x and z axes, respectively. Therefore, in the x-z plane we have for the projection of the line trajectory:










z(t) = (v_zx0/v_x0)*x_0(t) + n  [Equation 75]







To calculate the chirp parameters we must insert the temporal dependence of the z(t) and x0(t) functions, expressed in Equations 62 and 65, respectively.


We introduce the following notations:










ũ := F_2/(F_obj*F_1) + K*(F_1/F_2)*(c_x1 + c_x2)/v_a  [Equation 76]

B = -2*K*(F_1/F_2)*(b_x1 + b_x2)/v_a  [Equation 77]

t′ = t - D/(2*v_a)  [Equation 78]

M = F_2/F_1  [Equation 79]







Introducing these notations and the temporal dependences from Equations 62 and 65 into Equation 75, we get the projection of the 3D line:










M/(ũ - B*t′) = -(v_zx0/v_x0) * (K/(ũ - B*t′)) * (q*t′ + Δf_0x) + n  [Equation 80]







After some simplification we get:









M = -(v_zx0/v_x0)*K*Δf_0x + n*ũ - ((v_zx0/v_x0)*K*q + n*B)*t′  [Equation 81]







This equation must be fulfilled for each time point t′. To be valid for each t′, we must impose the following:










M + (v_zx0/v_x0)*K*Δf_0x - n*ũ = 0  [Equation 82]






and:

(v_zx0/v_x0)*K*q + n*B = 0  [Equation 83]







The first equation (Equation 82) gives:










ũ = (M + (v_zx0/v_x0)*K*Δf_0x) / n  [Equation 84]







Introducing ũ from Equation 76:












F_2/(F_obj*F_1) + K*(F_1/F_2)*(c_x1 + c_x2)/v_a = (M + (v_zx0/v_x0)*K*Δf_0x) / n  [Equation 85]







From this equation we can express p (defined by Equation 72) as follows:









p = c_x1 + c_x2 = ((v_a*M)/K) * ((M + (v_zx0/v_x0)*K*Δf_0x)/n - M/F_obj)  [Equation 86]







To express bx1=bx2=b and q=cx1−cx2, we need another constraint, which can be set from the desired value of the initial velocity vzx0.


We take the derivative of z(t) (Equation 62) at t=0, to find the initial velocity value, using the notations in Equations 76 and 77:










v_zx0 := v_zx(0) = -(B*M)/ũ^2  [Equation 87]







Expressing B from Equation 87:









B = -(v_zx0/M) * ((M + (v_zx0/v_x0)*K*Δf_0x)/n)^2  [Equation 88]







Introducing the expression of B from Equation 77, we obtain the parameter b:









b = ((v_zx0*v_a)/(4*K)) * ((M + (v_zx0/v_x0)*K*Δf_0x)/n)^2  [Equation 89]







To express q (defined by Equation 71) we use Equations 83 and 88:









q = c_x1 - c_x2 = ((n*v_x0)/(M*K)) * ((M + (v_zx0/v_x0)*K*Δf_0x)/n)^2  [Equation 90]







Finally, we can express cx1 and cx2 by adding and subtracting q and p (Equations 86 and 90):











c_x1 = ((M*v_a)/(2*K)) * ((M + (v_zx0/v_x0)*K*Δf_0x)/n - M/F_obj) + ((v_x0*n)/(2*K*M)) * ((M + (v_zx0/v_x0)*K*Δf_0x)/n)^2  [Equation 91]

and

c_x2 = ((M*v_a)/(2*K)) * ((M + (v_zx0/v_x0)*K*Δf_0x)/n - M/F_obj) - ((v_x0*n)/(2*K*M)) * ((M + (v_zx0/v_x0)*K*Δf_0x)/n)^2  [Equation 92]







The crucial parameter Δf0x can be calculated from the initially set x0(0) at t′=0. We then have:










Δf_0x = (x_0(0)*F_2) / (K*F_obj*F_1)  [Equation 93]







In a preferred embodiment the characteristic parameters of the AO devices are: K=0.002 rad/MHz, v_a=650*10^6 μm/s, the magnification of the lens system following the acousto-optic deflectors M=1, the initial frequency difference Δf=10 MHz, and the movement parameters: m=2, vz0=1 μm/μs, n=F_obj−4 μm. For these values, cx1 results in 3 kHz/μs, whereas cx2=17 kHz/s.


The acceleration a_zx in the z direction is approximately 0.1 m/s² with these parameters.
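The parameter chain of Equations 84-93 can also be checked numerically. The following Python sketch evaluates Δf0x, b, cx1 and cx2 for the x-z plane; F_obj and x0(0) are illustrative assumptions (they are not fixed above), so the printed values are for orientation only, not the embodiment's exact figures:

    # Numerical sketch of the parameter chain (Equations 84, 86, 89, 90, 93)
    # in the x-z plane. Units: lengths in um, time in us, frequencies in MHz.
    K     = 0.002          # deflector constant, rad/MHz (from the embodiment)
    v_a   = 650.0          # acoustic velocity, um/us (650*10^6 um/s)
    M     = 1.0            # magnification F_2/F_1 of the relay lens system
    F_obj = 8000.0         # objective focal length, um -- assumed value
    m     = 2.0            # slope v_zx0/v_x0 of the x-z line projection
    v_zx0 = 1.0            # initial z velocity, um/us
    v_x0  = v_zx0 / m      # initial x velocity, um/us
    x0    = 50.0           # starting coordinate x_0(0), um -- assumed value
    n     = F_obj - 4.0    # line intercept n = z_0(0) - m*x_0(0), um

    df_0x = x0 * M / (K * F_obj)            # Equation 93, using F_2/F_1 = M
    u     = (M + m * K * df_0x) / n         # Equation 84
    b     = v_zx0 * v_a / (4 * K) * u**2    # Equation 89: b_x1 = b_x2 = b
    p     = v_a * M / K * (u - M / F_obj)   # Equation 86: p = c_x1 + c_x2
    q     = n * v_x0 / (M * K) * u**2       # Equation 90: q = c_x1 - c_x2
    c_x1, c_x2 = (p + q) / 2, (p - q) / 2   # Equations 91 and 92
    print(df_0x, b, c_x1, c_x2)             # MHz, MHz/us^2, MHz/us, MHz/us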


Finally, we summarize our results. Here we demonstrate how it is possible to calculate the parameters for the non-linear chirped driver function, in order to move the focal spot from a given point with a given initial speed along a line path in the x-z plane. The parameters of the line path are selected according to the general formula, in 3D:






x_0 = x_0(0) + s*v_x0

y_0 = y_0(0) + s*v_y0

z_0 = z_0(0) + s*v_z0  [Equation 94]


Since the deflectors are deflecting in the x-z and y-z planes, we transform Equation 94 into the equations describing the line projections on these planes:










z_0 = m*x_0 + n = z_0(0) + (v_zx0/v_x0)*x_0 - (v_zx0/v_x0)*x_0(0)  [Equation 95a]

z_0 = k*y_0 + l = z_0(0) + (v_zy0/v_y0)*y_0 - (v_zy0/v_y0)*y_0(0)  [Equation 95b]







These imply that vzx0=vzy0=vz0, and:









m = v_zx0/v_x0  [Equation 96]

k = v_zy0/v_y0  [Equation 97]






n = z_0(0) - (v_zx0/v_x0)*x_0(0)  [Equation 98]

l = z_0(0) - (v_zy0/v_y0)*y_0(0)  [Equation 99]







To steer the deflectors, we need to determine the Δf0x, bx1, bx2, cx1, and cx2 parameters in the x-z plane as a function of the selected x0(0), z0(0), vx0, and vzx0 parameters of the trajectory and drift. The same is valid for the y-z plane: here we determine Δf0y, by1, by2, cy1, and cy2 for the desired y0(0), z0(0), vy0, and vzy0 of the trajectory.


The spot will then keep its shape during the drift, since the corresponding constraint is fulfilled in both planes. The initial velocities vx0 and vy0 along the x and y coordinates, together with the initial velocity vzx0=vzy0 set for z, determine the m and k parameters (Equations 96 and 97), and the acceleration values are also determined by these parameters. The resulting acceleration values are usually low within the practical parameter sets; therefore, the velocity of the spot will not change drastically for trajectories which are not too long. For illustration, the projection parameters of Equations 96-99 can be computed directly from the chosen starting point and velocity components, as in the sketch below.
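A minimal sketch (the function name and argument order are our assumptions):

    def line_projection_params(x0_0, y0_0, z0_0, v_x0, v_y0, v_z0):
        """Slopes and intercepts of the x-z and y-z projections of the 3D
        line (Equations 96-99); assumes v_zx0 = v_zy0 = v_z0 as in the text."""
        m = v_z0 / v_x0        # Equation 96
        k = v_z0 / v_y0        # Equation 97
        n = z0_0 - m * x0_0    # Equation 98
        l = z0_0 - k * y0_0    # Equation 99
        return m, k, n, l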


For the optical calculation we use a paraxial approximation of the whole AO microscope applied in two perpendicular planes whose orientations are set by the deflection directions of the AO deflectors (FIG. 9). We need the following three groups of equations: i) the simplified matrix equation of the AO microscope in the x-z (and y-z) planes (Equations 2-3); ii) the basic equation of the AO deflectors (Equation 7); and iii) temporally non-linear chirps for the acoustic frequencies in the deflectors deflecting in the x-z (and y-z) plane (f_i):











f_i(x,t) = f_i(0,0) + (b_xi*(t - D/(2*v_a) ± x/v_a) + c_xi) * (t - D/(2*v_a) ± x/v_a)  [Equation 100]







(where i=1 or i=2 indicates the first or second x axis deflector; D is the diameter of the AO deflector; and v_a is the propagation speed of the acoustic wave within the deflector)
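For concreteness, Equation 100 translates into a small helper function; the sign argument selects the ± x/v_a branch for the two deflectors of a pair (a sketch under these assumptions, with names that are ours):

    def chirp_frequency(x, t, f_i00, b_xi, c_xi, D, v_a, sign):
        """Non-linear chirp of Equation 100 for one deflector of a pair.
        sign = -1 or +1 selects the -/+ x/v_a branch (consistent with the
        convention of Equations 65-67: -1 for the first deflector, +1 for
        the second). Units must be consistent (e.g. um, us, MHz)."""
        tau = t - D / (2.0 * v_a) + sign * x / v_a
        return f_i00 + (b_xi * tau + c_xi) * tau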


Equation 100 was derived from Equations 10, 11, 19 and 20. In this paragraph we calculate everything in the x-z plane, the x axis being the deflection direction of one AO deflector pair (y being that of the other) and z being the optical axis coinciding with the symmetry axis of the cylindrical objective. The same calculation should be applied in the y-z plane, too (see the detailed calculation above). From these three groups of equations (i-iii) we can calculate the x0 coordinate of the focal spot (Equations 22 and 65). To have all rays focused in the focal point of the objective, the x- and x²-dependent parts of the x0 coordinate must vanish (all rays starting at any x coordinate in the deflector aperture must pass through the same x0 coordinate in the focal plane), which implies two equations (Equations 44 and 48), from which we can express the t dependence of the z coordinate (Equation 61).


Equation 61 has, however, a non-linear temporal dependence. Therefore, we need its Taylor series to simplify further calculations. Our simplest assumption was that the linear time dependence dominates over the quadratic part; therefore, the velocity along the z axis in the x-z plane is nearly constant (vzx) and, using Equation 64, the velocity along the x axis (vx) can be determined (see Equation 66).


In the last step we want to analyze the motion of the focal spot along different 3D trajectories. For simplicity, we calculate the drift along a general 3D line with an arbitrary velocity and an arbitrary angle relative to the axes. The x-z and y-z planes can be treated separately as above. In the x-z plane we can express the projection of the 3D line as:











z_x(t) = (v_zx0/v_x0)*x_0(t) + z_0 - (v_zx0/v_x0)*x_0(0)  [Equation 95a]






When we combine the expression zx(t) with x0(t), the similarly calculated zy(t) and y0(t), and add all the required initial positions (x0, y0, z0) and speed parameter values (vx0, vy0, vzx0=vzy0) of the focal spot, we can make explicit all the parameters required to calculate the non-linear chirps according to Equation 100 in the four AO deflectors (Δf0x, bx1, bx2, cx1, cx2 and Δf0y, by1, by2, cy1, cy2):

















Δf_0x = f_x1(0,0) - f_x2(0,0)

Δf_0x = (x_0(0)*F_2) / (K*F_obj*F_1)

b_x1 = ((v_zx0*v_a)/(4*K)) * ( (M + (v_zx0/v_x0)*(x_0(0)*F_2)/(F_obj*F_1)) / (z_0(0) - (v_zx0/v_x0)*x_0(0)) )^2

b_x2 = ((v_zx0*v_a)/(4*K)) * ( (M + (v_zx0/v_x0)*(x_0(0)*F_2)/(F_obj*F_1)) / (z_0(0) - (v_zx0/v_x0)*x_0(0)) )^2

c_x1 = ((M*v_a)/(2*K)) * ( (M + (v_zx0/v_x0)*(x_0(0)*F_2)/(F_obj*F_1)) / (z_0(0) - (v_zx0/v_x0)*x_0(0)) - M/F_obj ) + ((v_x0*(z_0(0) - (v_zx0/v_x0)*x_0(0)))/(2*K*M)) * ( (M + (v_zx0/v_x0)*(x_0(0)*F_2)/(F_obj*F_1)) / (z_0(0) - (v_zx0/v_x0)*x_0(0)) )^2

c_x2 = ((M*v_a)/(2*K)) * ( (M + (v_zx0/v_x0)*(x_0(0)*F_2)/(F_obj*F_1)) / (z_0(0) - (v_zx0/v_x0)*x_0(0)) - M/F_obj ) - ((v_x0*(z_0(0) - (v_zx0/v_x0)*x_0(0)))/(2*K*M)) * ( (M + (v_zx0/v_x0)*(x_0(0)*F_2)/(F_obj*F_1)) / (z_0(0) - (v_zx0/v_x0)*x_0(0)) )^2

Δf_0y = f_y1(0,0) - f_y2(0,0)

Δf_0y = (y_0(0)*F_2) / (K*F_obj*F_1)

b_y1 = ((v_zy0*v_a)/(4*K)) * ( (M + (v_zy0/v_y0)*(y_0(0)*F_2)/(F_obj*F_1)) / (z_0(0) - (v_zy0/v_y0)*y_0(0)) )^2

b_y2 = ((v_zy0*v_a)/(4*K)) * ( (M + (v_zy0/v_y0)*(y_0(0)*F_2)/(F_obj*F_1)) / (z_0(0) - (v_zy0/v_y0)*y_0(0)) )^2

c_y1 = ((M*v_a)/(2*K)) * ( (M + (v_zy0/v_y0)*(y_0(0)*F_2)/(F_obj*F_1)) / (z_0(0) - (v_zy0/v_y0)*y_0(0)) - M/F_obj ) + ((v_y0*(z_0(0) - (v_zy0/v_y0)*y_0(0)))/(2*K*M)) * ( (M + (v_zy0/v_y0)*(y_0(0)*F_2)/(F_obj*F_1)) / (z_0(0) - (v_zy0/v_y0)*y_0(0)) )^2

c_y2 = ((M*v_a)/(2*K)) * ( (M + (v_zy0/v_y0)*(y_0(0)*F_2)/(F_obj*F_1)) / (z_0(0) - (v_zy0/v_y0)*y_0(0)) - M/F_obj ) - ((v_y0*(z_0(0) - (v_zy0/v_y0)*y_0(0)))/(2*K*M)) * ( (M + (v_zy0/v_y0)*(y_0(0)*F_2)/(F_obj*F_1)) / (z_0(0) - (v_zy0/v_y0)*y_0(0)) )^2







Note that Δf0x and Δf0y are not fully determined; here we have extra freedom to select from the frequency ranges of the first (f1) and second (f2) groups of AO deflectors to keep them in the middle of the bandwidth during 3D scanning. In summary, we were able to derive a one-to-one relationship between the focal spot coordinates and speed on the one hand and the chirp parameters of the AO deflectors on the other. Therefore, we can generate fast movement along any 3D line, starting at any point in the scanning volume.
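The summarized one-to-one mapping can be collected into a single routine. The following is a minimal sketch, assuming vzx0=vzy0=vz0 and consistent units; the function and argument names are ours, not part of the method's specification:

    def chirp_parameters(x0, y0, z0, vx0, vy0, vz0, K, v_a, M, F_1, F_2, F_obj):
        """Chirp parameters (df_0, b_1 = b_2, c_1, c_2) for the x and y
        deflector pairs from the starting point and velocity of the focal
        spot, following the closed-form expressions summarized above."""
        def one_plane(r0, vr0):
            # r0, vr0: lateral coordinate and velocity in one plane (x or y)
            df_0 = r0 * F_2 / (K * F_obj * F_1)     # frequency offset
            n = z0 - (vz0 / vr0) * r0               # intercept (Eqs. 98/99)
            u = (M + (vz0 / vr0) * K * df_0) / n    # Equation 84
            b = vz0 * v_a / (4 * K) * u**2          # Equation 89
            p = v_a * M / K * (u - M / F_obj)       # Equation 86
            q = n * vr0 / (M * K) * u**2            # Equation 90
            return df_0, b, (p + q) / 2, (p - q) / 2
        return {"x": one_plane(x0, vx0), "y": one_plane(y0, vy0)}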


3D Two-Photon Microscope


In the following exemplary embodiment, we improved the 3D AO imaging method by using a novel AO signal synthesis card implemented in the electronics system used earlier. The new card uses a high-speed DA chip (AD9739A) fed by an FPGA (Xilinx Spartan-6). In its current state, the card allows the generation of 10-140 MHz signals of varying amplitude with frequency chirps implementing linear and quadratic temporal dependence. Synchronizing and commanding the cards allowed us to arbitrarily place the focal spot and let it drift along any 3D line for every (10-35 μs) AO cycle. We measured the back reflection of the radio frequency (RF) driver signal at each of the AO deflectors directly, and compensated for the RF reflection and loss to distribute RF energy more homogeneously between deflectors. This allowed higher absolute acoustic energy on the crystals, providing higher AO efficiency, and thus higher laser output under the objective and more homogeneous illumination of the scanning volume.


We also implemented the following opto-mechanical modifications to improve spatial resolution, extend field of view, and increase total transmitted light intensity. We removed the DeepSee unit of our Mai Tai eHP femtosecond laser (875-880 nm, SpectraPhysics) and used only a motorized external four-prism compressor to compensate for most of the second- and third-order material dispersion (72,000 fs² and 40,000 fs³) of the optical path. Coherent back-reflection was eliminated using a Faraday isolator (Electro-Optics Technology). To eliminate optical errors induced by thermal drift we implemented motorized mirrors (AG-M100N, Newport) and quadrant detectors (PDQ80A, Thorlabs) in closed-loop circuits in front of, and also behind, the motorized four-prism sequence. Z focusing and lateral scanning were achieved by two separate pairs of AO deflectors, which were coupled to two achromatic lenses (NT32-886, Edmund Optics). Finally, the light was coupled to an upright two-photon microscope (Femto2D, Femtonics Ltd.) using a telecentric relay consisting of an Edmund Optics (47319, f=200 mm) and a Linos (QIOPTIQ, G32 2246 525, f=180 mm) lens. The excitation laser light was delivered to the sample, and the fluorescence signal was collected, using either a 20× Olympus objective (XLUMPlanFI20×/1.0 lens, 20×, NA 1.0) for population imaging, or a 25× Nikon objective (CF175 Apochromat 25×W MP, NA 1.1) for spine imaging. The fluorescence was spectrally separated into two spectral bands by filters and dichroic mirrors, and it was then delivered to GaAsP photomultiplier tubes (Hamamatsu) fixed directly on the objective arm, which allows deep imaging over an 800 μm range with 2D galvano scanning. Because of the optical improvements and the increase in the efficiency of the radio frequency drive of the AO deflectors, spatial resolution and scanning volume were increased by about 15% and 36-fold, respectively. New software modules were developed for fast 3D dendritic measurements, and to compensate for sample drift.


Motion Correction in 3D


Data resulting from the 3D ribbon scanning, multi-layer, multi-frame scanning, and chessboard scanning methods are stored in a 3D array as time series of 2D frames. The 2D frames are sectioned into bars matching the AO drifts, which form the basic unit of our motion correction method. We selected the frame with the highest average intensity in the time series as a reference frame. Then we calculated the cross correlation between each bar of each frame and the corresponding bar of the reference frame to yield a set of displacement vectors in the data space. The displacement vector of each bar in each frame is transformed to the Cartesian coordinate system of the sample, knowing the scanning orientation of each bar. Noise bias is avoided by calculating the displacement vector of a frame as the median of the motion vectors of its bars. This common displacement vector of a single frame is transformed back to the data space. The resulting displacement vector for each bar in every frame is then used to shift the data of the bars using linear interpolation for subpixel precision. Gaps are filled with data from neighbouring bars, whenever possible.
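A minimal sketch of the bar-based correction, assuming NumPy and SciPy are available and that each frame is provided as a list of 2D bar arrays; the data layout and function names are our assumptions, and the orientation transform between data space and the sample's Cartesian frame is omitted here for brevity:

    import numpy as np
    from scipy.signal import fftconvolve

    def bar_shift(bar, ref_bar):
        """Displacement (dy, dx) of one bar relative to the reference bar,
        taken from the peak of their 2D cross correlation."""
        corr = fftconvolve(bar, ref_bar[::-1, ::-1], mode="same")
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        center = np.asarray(corr.shape) // 2   # zero-lag position
        return np.asarray(peak) - center

    def frame_shift(bars, ref_bars):
        """Common displacement of one frame: the median over its bars'
        shifts, which suppresses the noise bias of individual bars."""
        shifts = np.array([bar_shift(b, r) for b, r in zip(bars, ref_bars)])
        return np.median(shifts, axis=0)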


Various modifications to the above disclosed embodiments will be apparent to a person skilled in the art without departing from the scope of protection determined by the attached claims.

Claims
  • 1. Method for scanning along a line (3D line) lying at an arbitrary direction in a 3D space with a given speed using a 3D laser scanning microscope having a first pair of acousto-optic deflectors deflecting a laser beam in the x-z plane (x axis deflectors) and a second pair of acousto-optic deflectors deflecting the laser beam in the y-z plane (y axis deflectors) for focusing the laser beam in 3D, the method comprising: determining the coordinates x0(0), y0(0), z0(0) of one end of the 3D line serving as the starting point, determining scanning speed vector components vx0, vy0, vzx0 (=vzy0) such that the magnitude of the scanning speed vector corresponds to the given scanning speed and the direction of the scanning speed vector corresponds to the direction of the 3D line, providing non-linear chirp signals in the x axis deflectors and providing non-linear chirp signals in the y axis deflectors such as to move a focus spot from the starting point at a speed defined by the speed vector components vx0, vy0, vzx0.
  • 2. The method according to claim 1, wherein providing non-linear chirp signals in the x axis deflectors and providing non-linear chirp signals in the y axis deflectors comprises the steps of providing non-linear chirp signals in the x axis deflectors according to the function:
  • 3. The method according to claim 2, further comprising arranging a lens system after the two pairs of acousto-optic deflectors consisting of a first lens having a first focal distance, F1, a second lens having a second focal distance, F2, and an objective lens having a third focal distance, Fobj, and expressing the parameters Δf0x, bx1, bx2, cx1, cx2, Δf0y, by1, by2, cy1, and cy2 as
  • 4. Method for scanning a region of interest with a 3D laser scanning microscope having acousto-optic deflectors for focusing a laser beam within a 3D space defined by an optical axis (Z) of the microscope and X, Y axes that are perpendicular to the optical axis and to each other, the method comprising: selecting guiding points along the region of interest,fitting a 3D trajectory to the selected guiding points,extending each scanning point of the 3D trajectory to lines (3D lines) lying in the 3D space such as to extend partly in the direction of the optical axis, which 3D lines are transversal to the 3D trajectory at the given scanning points and which 3D lines, together, define a substantially continuous surface,scanning each 3D line by focusing the laser beam at one end of the 3D line and providing non-linear chirp signals for the acoustic frequencies in the deflectors for continuously moving the focus spot along the 3D line.
  • 5. The method according to claim 4, comprising extending each scanning point of the 3D trajectory to a plurality of parallel lines defining surfaces that are substantially transversal to the 3D trajectory at the given scanning points.
  • 6. The method according to claim 5, comprising extending each scanning point of the 3D trajectory to a plurality of parallel lines which lines, together, define a substantially continuous volume such that the 3D trajectory is located inside this volume.
  • 7. The method according to claim 4, wherein extending each scanning point of the 3D trajectory to a plurality of parallel lines defines cuboids that are substantially centred on the 3D trajectory at the given scanning points.
  • 8. The method according to claim 1, wherein the 3D lines are curved lines or substantially straight lines.
  • 9. The method according to claim 4, wherein the 3D lines are curved lines or substantially straight lines.
Priority Claims (1)
Number Date Country Kind
P1600519 Sep 2016 HU national
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of PCT/HU2017/050035, filed on Aug. 31, 2017, which claims priority of Hungarian Patent Application No. P1600519, filed on Sep. 2, 2016, each of which is incorporated herein by reference.

Continuation in Parts (1)
Number Date Country
Parent PCT/HU2017/050035 Aug 2017 US
Child 16290238 US