
Functional (mathematics)


Local vs non-local


If a functional's value can be computed for small segments of the input curve and then summed to find the total value, the functional is called local; otherwise it is called non-local. Non-locality occurs commonly when integrals appear separately in the numerator and denominator of an expression, as in calculations of center of mass.
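For illustration (the equations here are a sketch; the non-local example mirrors the center-of-mass remark above):

    F[y] = \int_{x_0}^{x_1} y(x)\,dx  (local),
    \bar{x}[\rho] = \frac{\int x\,\rho(x)\,dx}{\int \rho(x)\,dx}  (non-local).

The second functional cannot be evaluated segment by segment, since each segment's contribution to the ratio depends on the whole-curve integrals in the numerator and denominator.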

Linear functionals


Linear functionals first appeared in functional analysis, the study of vector spaces of functions. A typical example of a linear functional is integration: the linear transformation defined by the Riemann integral

    I(f) = ∫_a^b f(x) dx

is a linear functional from the vector space C[a, b] of continuous functions on the interval [a, b] to the real numbers. The linearity of I(f) follows from the standard facts about the integral:
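Spelled out (a short sketch for f, g ∈ C[a, b] and a scalar α):

    I(f + g) = \int_a^b [f(x) + g(x)]\,dx = \int_a^b f(x)\,dx + \int_a^b g(x)\,dx = I(f) + I(g),
    I(\alpha f) = \int_a^b \alpha f(x)\,dx = \alpha \int_a^b f(x)\,dx = \alpha\, I(f).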

Functional derivative


The functional derivative is defined first; then the functional differential is defined in terms of the functional derivative.

Functional derivative


Given a manifold M representing (continuous/smooth/with certain boundary conditions/etc.) functions ρ and a functional F defined as

    F: M → ℝ,

the functional derivative of F[ρ], denoted δF/δρ, is defined by[1]

    ∫ (δF/δρ)(x) ϕ(x) dx = lim_{ε→0} ( F[ρ + εϕ] − F[ρ] ) / ε = [ d/dε F[ρ + εϕ] ]_{ε=0},

where ϕ is an arbitrary function. The quantity εϕ is called the variation of ρ.

Functional differential


The differential (or variation or first variation) of the functional F[ρ] is,[2][Note 1]

    δF = ∫ (δF/δρ(x)) δρ(x) dx,

where δρ(x) = εϕ(x) is the variation of ρ(x). This is similar in form to the total differential of a function F(ρ1, ρ2, ..., ρn),

    dF = Σ_{i=1}^{n} (∂F/∂ρi) dρi,

where ρ1, ρ2, ..., ρn are independent variables. Comparing the last two equations, the functional derivative δF/δρ(x) has a role similar to that of the partial derivative ∂F/∂ρi, where the variable of integration x is like a continuous version of the summation index i.[3]

Properties


Like the derivative of a function, the functional derivative satisfies the following properties, where F[ρ] and G[ρ] are functionals:

  • Linearity:[4]

        δ(λF + μG)/δρ(x) = λ δF/δρ(x) + μ δG/δρ(x),

    where λ, μ are constants.

  • Product rule:[5]

        δ(FG)/δρ(x) = (δF/δρ(x)) G[ρ] + F[ρ] (δG/δρ(x)).

  • Chain rules: If F is a functional of another functional G, then[6]

        δF[G[ρ]]/δρ(y) = ∫ (δF[G]/δG(x)) (δG[ρ](x)/δρ(y)) dx.

    If f is a differentiable function, then[7]

        δF[f(ρ)]/δρ(x) = (δF[f(ρ)]/δf(ρ(x))) f′(ρ(x)).

Lemmas


For a functional of the form

    F[ρ] = ∫ f(r, ρ(r), ∇ρ(r)) dr,

the functional derivative can be written as

    δF/δρ(r) = ∂f/∂ρ − ∇·(∂f/∂∇ρ),

where ρ = ρ(r) and f = f(r, ρ, ∇ρ). This formula is for the case of the functional form given by F[ρ] at the beginning of this section. For other functional forms, the definition of the functional derivative can be used as the starting point for its determination. (See the example Coulomb potential energy functional.)

Proof: given a functional

    F[ρ] = ∫ f(r, ρ, ∇ρ) dr

and a function ϕ(r) that vanishes on the boundary of the region of integration, from the previous section Definition,

    ∫ (δF/δρ) ϕ dr = [ d/dε ∫ f(r, ρ + εϕ, ∇ρ + ε∇ϕ) dr ]_{ε=0}
                   = ∫ ( (∂f/∂ρ) ϕ + (∂f/∂∇ρ) · ∇ϕ ) dr
                   = ∫ ( (∂f/∂ρ) ϕ + ∇·( (∂f/∂∇ρ) ϕ ) − ( ∇·(∂f/∂∇ρ) ) ϕ ) dr
                   = ∫ ( (∂f/∂ρ) ϕ − ( ∇·(∂f/∂∇ρ) ) ϕ ) dr
                   = ∫ ( ∂f/∂ρ − ∇·(∂f/∂∇ρ) ) ϕ dr.

The second line is obtained using the total derivative, where ∂f/∂∇ρ is the derivative of a scalar with respect to a vector.[Note 2] The third line was obtained by use of a product rule for divergence. The fourth line was obtained using the divergence theorem and the condition that ϕ = 0 on the boundary of the region of integration. Since ϕ is an arbitrary function, applying the fundamental lemma of calculus of variations to the last line yields the functional derivative.
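As a numerical sanity check (an illustrative sketch, not part of the original derivation): for F[ρ] = ∫ ρ(x)² dx the formula above gives δF/δρ(x) = 2ρ(x), which can be verified on a grid by perturbing ρ at a single point.

    import numpy as np

    # Finite-difference check of a functional derivative (illustrative sketch).
    # For F[rho] = ∫ rho(x)^2 dx, the formula above gives dF/drho(x) = 2 rho(x).
    x = np.linspace(0.0, 1.0, 1001)
    h = x[1] - x[0]
    rho = np.sin(np.pi * x)            # any smooth test density

    def F(r):
        return np.trapz(r**2, x)       # the functional F[rho] = ∫ rho^2 dx

    i = 400                            # grid point where we probe the derivative
    eps = 1e-6
    bump = np.zeros_like(rho)
    bump[i] = 1.0 / h                  # discrete approximation of a delta function at x[i]

    numerical = (F(rho + eps * bump) - F(rho)) / eps
    analytic = 2 * rho[i]
    print(numerical, analytic)         # both ≈ 1.902 = 2 sin(0.4π)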

Examples


Thomas–Fermi kinetic energy functional


The Thomas–Fermi model of 1927 used a kinetic energy functional for a noninteracting uniform electron gas in a first attempt at a density-functional theory of electronic structure:

    T_TF[ρ] = C_F ∫ ρ(r)^{5/3} dr,   C_F = (3/10)(3π²)^{2/3}   (atomic units).

Since the integrand of T_TF[ρ] does not involve derivatives of ρ(r), the functional derivative of T_TF[ρ] is,[8]

    δT_TF/δρ(r) = (5/3) C_F ρ(r)^{2/3}.

Coulomb potential energy functional


For the electron-nucleus potential, Thomas and Fermi employed the Coulomb potential energy functional

    V[ρ] = ∫ ( ρ(r) / |r| ) dr.

Applying the definition of functional derivative,

    ∫ (δV/δρ(r)) ϕ(r) dr = [ d/dε ∫ ( ρ(r) + εϕ(r) ) / |r| dr ]_{ε=0} = ∫ (1/|r|) ϕ(r) dr.

So,

    δV/δρ(r) = 1/|r|.

For the classical part of the electron-electron interaction, Thomas and Fermi employed the Coulomb potential energy functional

    J[ρ] = (1/2) ∫∫ ( ρ(r) ρ(r′) / |r − r′| ) dr dr′.

From the definition of the functional derivative,

    ∫ (δJ/δρ(r)) ϕ(r) dr = [ d/dε J[ρ + εϕ] ]_{ε=0}
        = (1/2) ∫∫ ( ϕ(r) ρ(r′) + ρ(r) ϕ(r′) ) / |r − r′| dr dr′.

The first and second terms on the right-hand side of the last equation are equal, since r and r′ in the second term can be interchanged without changing the value of the integral. Therefore,

    ∫ (δJ/δρ(r)) ϕ(r) dr = ∫ ( ∫ ρ(r′)/|r − r′| dr′ ) ϕ(r) dr,

and the functional derivative of the electron-electron Coulomb potential energy functional J[ρ] is,[9]

    δJ/δρ(r) = ∫ ρ(r′)/|r − r′| dr′.

The second functional derivative is

    δ²J/δρ(r′)δρ(r) = 1/|r′ − r|.

Weizsäcker kinetic energy functional


In 1935 von Weizsäcker proposed to add a gradient correction to the Thomas–Fermi kinetic energy functional to make it better suited to a molecular electron cloud:

    T_W[ρ] = (1/8) ∫ ( ∇ρ(r) · ∇ρ(r) / ρ(r) ) dr = ∫ t_W dr,

where

    t_W = (1/8) |∇ρ|² / ρ

is the Weizsäcker kinetic energy density. Using a previously derived formula for the functional derivative,

    δT_W/δρ = ∂t_W/∂ρ − ∇·(∂t_W/∂∇ρ) = −(1/8) |∇ρ|²/ρ² − ∇·( ∇ρ / (4ρ) ),

and the result is,[10]

    δT_W/δρ = (1/8) |∇ρ|²/ρ² − ∇²ρ/(4ρ).

Entropy


The entropy of a discrete random variable is a functional of the probability mass function,

    H[p(x)] = −Σ_x p(x) log p(x).

Thus,

    Σ_x (δH/δp(x)) ϕ(x) = [ d/dε H[p(x) + εϕ(x)] ]_{ε=0} = Σ_x ( −log p(x) − 1 ) ϕ(x).

Thus,

    δH/δp(x) = −log p(x) − 1.
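A quick numerical check of this result (an illustrative sketch; any small probability vector works):

    import numpy as np

    # Numerical check that dH/dp_i = -log(p_i) - 1 for H(p) = -sum_i p_i log p_i.
    # Note: p is treated as a vector of independent variables here (the simplex
    # constraint sum(p) = 1 is not enforced, matching the derivation above).
    p = np.array([0.2, 0.3, 0.5])

    def H(q):
        return -np.sum(q * np.log(q))

    eps = 1e-7
    grad_numeric = np.array([
        (H(p + eps * np.eye(3)[i]) - H(p)) / eps for i in range(3)
    ])
    grad_analytic = -np.log(p) - 1.0
    print(grad_numeric)   # ≈ [ 0.609,  0.204, -0.307]
    print(grad_analytic)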

Exponential


Let

    F[ϕ] = e^{∫ ϕ(x) g(x) dx}.

Using the delta function as a test function,

    δF[ϕ]/δϕ(y) = lim_{ε→0} ( e^{∫ (ϕ(x) + εδ(x − y)) g(x) dx} − e^{∫ ϕ(x) g(x) dx} ) / ε = g(y) e^{∫ ϕ(x) g(x) dx}.

Thus,

    δF[ϕ]/δϕ(y) = g(y) F[ϕ].

This is particularly useful in calculating correlation functions from the partition function in quantum field theory.
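In that context, each functional derivative of a source-coupled generating functional brings down one factor of g; schematically (a sketch, with Z[J] a generic partition function with external source J):

    \frac{\delta^n F[\phi]}{\delta\phi(y_1)\cdots\delta\phi(y_n)} = g(y_1)\cdots g(y_n)\, F[\phi],
    \qquad
    \langle \varphi(y_1)\cdots\varphi(y_n) \rangle = \frac{1}{Z[0]} \left. \frac{\delta^n Z[J]}{\delta J(y_1)\cdots\delta J(y_n)} \right|_{J=0}.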

Functional derivative of a function


A function can be written in the form of an integral, like a functional. For example,

    ρ(r) = ∫ ρ(r′) δ(r − r′) dr′.

Since the integrand does not depend on derivatives of ρ, the functional derivative of ρ(r) is,

    δρ(r)/δρ(r′) = δ(r − r′).

Consider the functional

    J[f] = ∫_a^b L(x, f(x), f′(x)) dx,

where f′(x) ≡ df/dx. If f is varied by adding to it a function δf, and the resulting integrand L(x, f + δf, f′ + δf′) is expanded in powers of δf, then the change in the value of J to first order in δf can be expressed as follows:[11][Note 3]

    δJ = ∫_a^b ( ∂L/∂f − d/dx (∂L/∂f′) ) δf(x) dx + (∂L/∂f′) δf |_a^b.

The coefficient of δf(x), denoted as δJ/δf(x), is called the functional derivative of J with respect to f at the point x.[3] For this example functional, the functional derivative is the left-hand side of the Euler–Lagrange equation,[12]

    δJ/δf(x) = ∂L/∂f − d/dx (∂L/∂f′).
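The same Euler–Lagrange expression can be produced symbolically; a minimal sketch using SymPy's euler_equations helper (the Lagrangian here is an arbitrary illustrative choice):

    import sympy as sp
    from sympy.calculus.euler import euler_equations

    x = sp.symbols('x')
    f = sp.Function('f')

    # Illustrative Lagrangian L(x, f, f') = f'^2/2 - f^2/2
    L = sp.Derivative(f(x), x)**2 / 2 - f(x)**2 / 2

    # euler_equations returns dL/df - d/dx(dL/df') = 0
    print(euler_equations(L, [f(x)], [x]))
    # [Eq(-f(x) - Derivative(f(x), (x, 2)), 0)]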

Using the delta function as a test function


In physics, it is common to use the Dirac delta function δ(x − y) in place of a generic test function ϕ(x), to yield the functional derivative at the point y (this is a point of the whole functional derivative as a partial derivative is a component of the gradient):

    δF[ρ(x)]/δρ(y) = lim_{ε→0} ( F[ρ(x) + εδ(x − y)] − F[ρ(x)] ) / ε.

This works in cases when F[ρ(x) + εf(x)] formally can be expanded as a series (or at least up to first order) in ε. The formula is, however, not mathematically rigorous, since F[ρ(x) + εδ(x − y)] is usually not even defined.

The definition given in a previous section is based on a relationship that holds for all test functions ϕ, so one might think that it should hold also when ϕ is chosen to be a specific function such as the delta function. However, the delta function is not a valid test function (it is not even a proper function).

Notes

  1. ^ Called differential in (Parr & Yang 1989, p. 246), variation or first variation in (Courant & Hilbert 1953, p. 186), and variation or differential in (Gelfand & Fomin 2000, p. 11, § 3.2).
  2. ^ For a three-dimensional Cartesian coordinate system, ∂f/∂∇ρ = (∂f/∂ρ_x) x̂ + (∂f/∂ρ_y) ŷ + (∂f/∂ρ_z) ẑ, where ρ_x = ∂ρ/∂x, etc., and x̂, ŷ, ẑ are unit vectors along the coordinate axes.
  3. ^ According to Giaquinta & Hildebrandt (1996, p. 18), this notation is customary in physical literature.


Thomas–Fermi model


The predecessor to density functional theory was the Thomas–Fermi model, developed independently by both Thomas and Fermi in 1927. They used a statistical model to approximate the distribution of electrons in an atom. The mathematical basis postulated that electrons are distributed uniformly in phase space, with two electrons in every h³ of volume.[13] For each element of coordinate-space volume we can fill out a sphere of momentum space up to the Fermi momentum p_F,[14]

    (4/3) π p_F³(r).


Kinetic energy


For a small volume element ΔV, and for the atom in its ground state, we can fill out a spherical momentum-space volume V_F up to the Fermi momentum p_F, and thus,[15]

    V_F = (4/3) π p_F³(r),

where r is a point in ΔV.

The corresponding phase-space volume is

    ΔV_ph = V_F ΔV = (4/3) π p_F³(r) ΔV.

The electrons in ΔV_ph are distributed uniformly with two electrons per h³ of this phase-space volume, where h is Planck's constant.[16] Then the number of electrons in ΔV_ph is

    ΔN_ph = (2/h³) ΔV_ph = (8π/3h³) p_F³(r) ΔV.

The number of electrons in ΔV is

    ΔN = n(r) ΔV,

where n(r) is the electron density. Equating the number of electrons in ΔV to that in ΔV_ph gives

    n(r) = (8π/3h³) p_F³(r).

The fraction of electrons at r that have momentum between p and p + dp is,

    F_r(p) dp = 4πp² dp / ( (4/3) π p_F³(r) )   for p ≤ p_F(r), and zero otherwise.

Using the classical expression for the kinetic energy of an electron with mass m_e, the kinetic energy per unit volume at r for the electrons of the atom is,

    t(r) = ∫ (p²/2m_e) n(r) F_r(p) dp = (3/10) n(r) p_F²(r)/m_e = C_F n(r)^{5/3},

where a previous expression relating n(r) to p_F(r) has been used and

    C_F = (3h²/40m_e) (3/π)^{2/3}.

Integrating the kinetic energy per unit volume t(r) over all space results in the total kinetic energy of the electrons,[17]

    T_TF[n] = C_F ∫ n(r)^{5/3} d³r.

This result shows that the total kinetic energy of the electrons can be expressed in terms of only the spatially varying electron density according to the Thomas–Fermi model. As such, they were able to calculate the energy of an atom using this expression for the kinetic energy combined with the classical expressions for the nuclear-electron and electron-electron interactions (which can both also be represented in terms of the electron density).
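As a concrete, illustrative use of this functional: in atomic units C_F = (3/10)(3π²)^{2/3} ≈ 2.871, and applying T_TF to the exact hydrogen ground-state density n(r) = e^{−2r}/π gives roughly 0.29 hartree against the true kinetic energy of 0.5 hartree, a sketch of the level of accuracy involved:

    import numpy as np

    # Thomas-Fermi kinetic energy of the hydrogen ground state (atomic units).
    # n(r) = exp(-2r)/pi is the exact density; T_TF = C_F * ∫ n^{5/3} d^3r.
    C_F = (3.0 / 10.0) * (3.0 * np.pi**2) ** (2.0 / 3.0)   # ≈ 2.871

    r = np.linspace(1e-6, 30.0, 200_000)
    n = np.exp(-2.0 * r) / np.pi

    T_TF = C_F * np.trapz(n ** (5.0 / 3.0) * 4.0 * np.pi * r**2, r)
    print(T_TF)   # ≈ 0.289 hartree; the exact kinetic energy is 0.5 hartree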

Potential energies


The potential energy of an atom's electrons, due to the electric attraction of the positively charged nucleus, is

    U_eN = ∫ n(r) V_N(r) d³r,

where V_N(r) is the potential energy of an electron at r due to the electric field of the nucleus. For the case of a nucleus centered at r = 0 with charge Ze, where Z is a positive integer and e is the elementary charge,

    V_N(r) = −Ze²/r.

The potential energy of the electrons due to their mutual electric repulsion is,

    U_ee = (e²/2) ∫∫ ( n(r) n(r′) / |r − r′| ) d³r d³r′.

Total energy


The total energy of the electrons is the sum of their kinetic and potential energies,[18]

    E = T_TF[n] + U_eN + U_ee = C_F ∫ n(r)^{5/3} d³r + ∫ n(r) V_N(r) d³r + (e²/2) ∫∫ ( n(r) n(r′) / |r − r′| ) d³r d³r′.

Inaccuracies and improvements


Although this was an important first step, the Thomas–Fermi equation's accuracy is limited because the resulting expression for the kinetic energy is only approximate, and because the method does not attempt to represent the exchange energy of an atom as a consequence of the Pauli principle. A term for the exchange energy was added by Dirac in 1928.

However, the Thomas–Fermi–Dirac theory remained rather inaccurate for most applications. The largest source of error was in the representation of the kinetic energy, followed by the errors in the exchange energy and the complete neglect of electron correlation.

In 1962, Edward Teller showed that Thomas–Fermi theory cannot describe molecular bonding – the energy of any molecule calculated with TF theory is higher than the sum of the energies of the constituent atoms. More generally, the total energy of a molecule decreases when the bond lengths are uniformly increased.[19][20][21][22] This can be overcome by improving the expression for the kinetic energy.[23]

The Thomas–Fermi kinetic energy can be improved by adding to it the Weizsäcker (1935) correction,[24] yielding a much-improved Thomas–Fermi–Dirac–Weizsäcker density functional theory (TFDW-DFT). This is roughly comparable to the Hartree and Hartree–Fock mean-field theories, which treat neither static electron correlation (treated by the CASSCF theory developed by Björn Roos' group in Lund, Sweden) nor dynamic correlation (treated by Møller–Plesset perturbation theory to second order, MP2, or by CASPT2, the extension of MP2 theory to systems not well treated by simple single-reference/configuration methods such as Hartree–Fock theory and Kohn–Sham DFT). Note that KS-DFT has also been extended to treat systems whose ground electronic state is not well represented by a single Slater determinant of Hartree–Fock or Kohn–Sham orbitals, in the so-called CAS-DFT method, also developed in the group of Björn Roos in Lund.



Pauli exclusion principle: connection to quantum state symmetry


The Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric. An antisymmetric two-particle state is represented as a sum of states in which one particle is in state |x⟩ and the other in state |y⟩:

    |ψ⟩ = Σ_{x,y} A(x, y) |x, y⟩,

and antisymmetry under exchange means that

    A(x, y) = −A(y, x).

This implies A(x, y) = 0 when x = y, which is Pauli exclusion. It is true in any basis, since unitary changes of basis keep antisymmetric matrices antisymmetric, although strictly speaking, the quantity A(x, y) is not a matrix but an antisymmetric rank-two tensor.

Conversely, if the diagonal quantities A(x, x) are zero in every basis, then the wavefunction component

    A(x, y) = ⟨ψ| x, y⟩

is necessarily antisymmetric.

Quantum mechanical description of identical particles


Symmetrical and anti-symmetrical states

Antisymmetric wavefunction for a (fermionic) 2-particle state in an infinite square well potential.

Let us define a linear operator P, called the exchange operator. When it acts on a tensor product of two state vectors, it exchanges the values of the state vectors:

    P ( |ψ⟩ ⊗ |ϕ⟩ ) = |ϕ⟩ ⊗ |ψ⟩.

P is both Hermitian and unitary. Because it is unitary, we can regard it as a symmetry operator. We can describe this symmetry as the symmetry under the exchange of labels attached to the particles (i.e., to the single-particle Hilbert spaces).

Clearly, P² = 1 (the identity operator), so the eigenvalues of P are +1 and −1. The corresponding eigenvectors are the symmetric and antisymmetric states:

    |ψ, ϕ⟩_S = (1/√2) ( |ψ⟩ ⊗ |ϕ⟩ + |ϕ⟩ ⊗ |ψ⟩ ),
    |ψ, ϕ⟩_A = (1/√2) ( |ψ⟩ ⊗ |ϕ⟩ − |ϕ⟩ ⊗ |ψ⟩ ).

In other words, symmetric and antisymmetric states are essentially unchanged under the exchange of particle labels: they are only multiplied by a factor of +1 or −1, rather than being "rotated" somewhere else in the Hilbert space. This indicates that the particle labels have no physical meaning, in agreement with our earlier discussion on indistinguishability.

We have mentioned that P is Hermitian. As a result, it can be regarded as an observable of the system, which means that we can, in principle, perform a measurement to find out if a state is symmetric or antisymmetric. Furthermore, the equivalence of the particles indicates that the Hamiltonian can be written in a symmetrical form, such as

    H = p₁²/2m + p₂²/2m + U(|x₁ − x₂|) + V(x₁) + V(x₂).

It is possible to show that such Hamiltonians satisfy the commutation relation

    [P, H] = 0.

According to the Heisenberg equation, this means that the value of P is a constant of motion. If the quantum state is initially symmetric (antisymmetric), it will remain symmetric (antisymmetric) as the system evolves. Mathematically, this says that the state vector is confined to one of the two eigenspaces of P, and is not allowed to range over the entire Hilbert space. Thus, we might as well treat that eigenspace as the actual Hilbert space of the system. This is the idea behind the definition of Fock space.

Symmetric wavefunction for a (bosonic) 2-particle state in an infinite square well potential.

We will now make the above discussion concrete, using the formalism developed in the article on the mathematical formulation of quantum mechanics.

Let n denote a complete set of (discrete) quantum numbers for specifying single-particle states (for example, for the particle-in-a-box problem we can take n to be the quantized wave vector of the wavefunction). For simplicity, consider a system composed of two identical particles. Suppose that one particle is in the state n1, and the other is in the state n2. What is the quantum state of the system? Intuitively, it should be

    |ψ⟩ = |n1⟩ |n2⟩,

which is simply the canonical way of constructing a basis for a tensor product space of the combined system from the individual spaces. However, this expression implies the ability to identify the particle with n1 as "particle 1" and the particle with n2 as "particle 2". If the particles are indistinguishable, this is impossible by definition; either particle can be in either state. It turns out that we must have[25]

    |ψ⟩ = (1/√2) ( |n1⟩|n2⟩ ± |n2⟩|n1⟩ ).

To see this, imagine a system of two identical particles. Suppose we know that one of the particles is in state |n1⟩ and the other is in state |n2⟩. Prior to the measurement, there is no way to know whether particle 1 is in state |n1⟩ and particle 2 in state |n2⟩, or the other way around, because the particles are indistinguishable. So there are equal probabilities for each case, meaning that the system is in a superposition of both states prior to the measurement.

States where this is a sum are known as symmetric, while states involving the difference are called antisymmetric. More completely, symmetric states have the form

    |n1, n2; S⟩ ∝ |n1⟩|n2⟩ + |n2⟩|n1⟩,

while antisymmetric states have the form

    |n1, n2; A⟩ ∝ |n1⟩|n2⟩ − |n2⟩|n1⟩.

Note that if n1 and n2 are the same, the antisymmetric expression gives zero, which cannot be a state vector since it cannot be normalized. In other words, in an antisymmetric state two identical particles cannot occupy the same single-particle state. This is known as the Pauli exclusion principle, and it is the fundamental reason behind the chemical properties of atoms and the stability of matter.

Exchange symmetry


The importance of symmetric and antisymmetric states is ultimately based on empirical evidence. It appears to be a fact of nature that identical particles do not occupy states of a mixed symmetry, such as

    |ψ⟩ = α |n1⟩|n2⟩ + β |n2⟩|n1⟩   with |α| ≠ |β|.

There is actually an exception to this rule, which we will discuss later. On the other hand, we can show that the symmetric and antisymmetric states are in a sense special, by examining a particular symmetry of the multiple-particle states known as exchange symmetry.

N particles


The above discussion generalizes readily to the case of N particles. Suppose we have N particles with quantum numbers n1, n2, ..., nN. If the particles are bosons, they occupy a totally symmetric state, which is symmetric under the exchange of any two particle labels:

    |n1 n2 ⋯ nN; S⟩ = sqrt( Π_j n_j! / N! ) Σ_p |n_{p(1)}⟩ |n_{p(2)}⟩ ⋯ |n_{p(N)}⟩.

Here, the sum is taken over all states that are distinct under the permutation p of the N elements. The square root to the left of the sum is a normalizing constant, and the quantity n_j stands for the number of times each of the single-particle states appears in the N-particle state. The distinct orderings can be pictured as a matrix in which each row represents one permutation of the N elements; taking the first row as a reference, subsequent rows differ from it by one transposition, two transpositions, and so on.

In the same vein, fermions occupy totally antisymmetric states:

    |n1 n2 ⋯ nN; A⟩ = (1/√N!) Σ_p sgn(p) |n_{p(1)}⟩ |n_{p(2)}⟩ ⋯ |n_{p(N)}⟩.

Here, sgn(p) is the signature of each permutation (i.e. +1 if p is composed of an even number of transpositions, and −1 if odd). Note that we have omitted the Π_j n_j! term, because each single-particle state can appear only once in a fermionic state; otherwise the sum would again be zero due to the antisymmetry, thus representing a physically impossible state. This is the Pauli exclusion principle for many particles.

These states have been normalized so that

    ⟨n1 n2 ⋯ nN; S|n1 n2 ⋯ nN; S⟩ = ⟨n1 n2 ⋯ nN; A|n1 n2 ⋯ nN; A⟩ = 1.

Measurements of identical particles


Suppose we have a system of N bosons (fermions) in the symmetric (antisymmetric) state

    |n1 n2 ⋯ nN; S/A⟩,

and we perform a measurement of some other set of discrete observables, m. In general, this yields some result m1 for one particle, m2 for another particle, and so forth. If the particles are bosons (fermions), the state after the measurement must remain symmetric (antisymmetric), i.e.

    |m1 m2 ⋯ mN; S/A⟩.

The probability of obtaining a particular result for the m measurement is

    P_{S/A}(m1, …, mN) ≡ |⟨m1 ⋯ mN; S/A | n1 ⋯ nN; S/A⟩|².

We can show that

    Σ_{m1 ≤ m2 ≤ ⋯ ≤ mN} P_{S/A}(m1, …, mN) = 1,

which verifies that the total probability is 1. Note that we have to restrict the sum to ordered values of m1, ..., mN to ensure that we do not count each multi-particle state more than once.

Wavefunction representation


So far, we have worked with discrete observables. We will now extend the discussion to continuous observables, such as the position x.

Recall that an eigenstate of a continuous observable represents an infinitesimal range of values of the observable, not a single value as with discrete observables. For instance, if a particle is in a state |ψ⟩, the probability of finding it in a region of volume d³x surrounding some position x is

    |⟨x|ψ⟩|² d³x.

As a result, the continuous eigenstates |x⟩ are normalized to the delta function instead of unity:

    ⟨x|x′⟩ = δ³(x − x′).

We can construct symmetric and antisymmetric multi-particle states out of continuous eigenstates in the same way as before. However, it is customary to use a different normalizing constant:

    |x1 x2 ⋯ xN; S/A⟩ = (1/√N!) Σ_p (±1)^p |x_{p(1)}⟩ |x_{p(2)}⟩ ⋯ |x_{p(N)}⟩.

We can then write a many-body wavefunction,

    Ψ_{n1 n2 ⋯ nN}(x1, x2, …, xN) ≡ ⟨x1 x2 ⋯ xN; S/A | n1 n2 ⋯ nN; S/A⟩,

where the single-particle wavefunctions are defined, as usual, by

    ψ_n(x) ≡ ⟨x|n⟩.

The most important property of these wavefunctions is that exchanging any two of the coordinate variables changes the wavefunction by only a plus or minus sign. This is the manifestation of symmetry and antisymmetry in the wavefunction representation:

    Ψ(⋯ x_i ⋯ x_j ⋯) = ± Ψ(⋯ x_j ⋯ x_i ⋯).

The many-body wavefunction has the following significance: if the system is initially in a state with quantum numbers n1, ..., nN, and we perform a position measurement, the probability of finding particles in infinitesimal volumes near x1, x2, ..., xN is

    N! |Ψ_{n1 ⋯ nN}(x1, …, xN)|² d³x1 d³x2 ⋯ d³xN.

The factor of N! comes from our normalizing constant, which has been chosen so that, by analogy with single-particle wavefunctions,

    ∫ d³x1 ∫ d³x2 ⋯ ∫ d³xN |Ψ_{n1 ⋯ nN}(x1, …, xN)|² = 1.

Because each integral runs over all possible values of x, each multi-particle state appears N! times in the integral. In other words, the probability associated with each event is evenly distributed across N! equivalent points in the integral space. Because it is usually more convenient to work with unrestricted integrals than restricted ones, we have chosen our normalizing constant to reflect this.

Finally, it is interesting to note that an antisymmetric wavefunction can be written as the determinant of a matrix, known as a Slater determinant:

    Ψ_{n1 ⋯ nN}(x1, …, xN) = (1/√N!) det[ ψ_{n_i}(x_j) ]_{i,j = 1, …, N}.
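A small numerical illustration (a sketch with made-up orbitals): evaluating a two-particle Slater determinant wavefunction on a grid and checking its antisymmetry.

    import numpy as np

    # Evaluate a 2-particle Slater determinant Psi(x1, x2) built from two
    # illustrative single-particle orbitals (particle-in-a-box states).
    def phi(n, x, L=1.0):
        return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

    def slater(x1, x2):
        # Psi = (1/sqrt(2!)) * det [[phi_1(x1), phi_1(x2)], [phi_2(x1), phi_2(x2)]]
        M = np.array([[phi(1, x1), phi(1, x2)],
                      [phi(2, x1), phi(2, x2)]])
        return np.linalg.det(M) / np.sqrt(2.0)

    print(slater(0.3, 0.7))   # some value
    print(slater(0.7, 0.3))   # same magnitude, opposite sign (antisymmetry)
    print(slater(0.4, 0.4))   # 0: two fermions cannot share the same coordinates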


Hartree–Fock (HF)


Hartree–Fock algorithm


The Hartree–Fock method is typically used to solve the time-independent Schrödinger equation for a multi-electron atom or molecule as described in the Born–Oppenheimer approximation. Since there are no known solutions for many-electron systems (hydrogenic atoms and the diatomic hydrogen cation being notable one-electron exceptions), the problem is solved numerically. Due to the nonlinearities introduced by the Hartree–Fock approximation, the equations are solved using a nonlinear method such as iteration, which gives rise to the name "self-consistent field method."

Greatly simplified algorithmic flowchart illustrating the Hartree–Fock method

Approximations


The Hartree–Fock method makes five major simplifications in order to deal with this task:

  • The Born–Oppenheimer approximation is inherently assumed. The full molecular wave function is actually a function of the coordinates of each of the nuclei, in addition to those of the electrons.
  • Typically, relativistic effects are completely neglected. The momentum operator is assumed to be completely non-relativistic.
  • The variational solution is assumed to be a linear combination of a finite number of basis functions, which are usually (but not always) chosen to be orthogonal. The finite basis set is assumed to be approximately complete.
  • Each energy eigenfunction is assumed to be describable by a single Slater determinant, an antisymmetrized product of one-electron wave functions (i.e., orbitals).
  • The mean field approximation is implied. Effects arising from deviations from this assumption, known as electron correlation, are completely neglected for the electrons of opposite spin, but are taken into account for electrons of parallel spin.[26][27] (Electron correlation should not be confused with electron exchange, which is fully accounted for in the Hartree–Fock method.)[27]

Relaxation of the last two approximations gives rise to many so-called post-Hartree–Fock methods.

The Fock operator


Because the electron-electron repulsion term of the electronic molecular Hamiltonian involves the coordinates of two different electrons, it is necessary to reformulate it in an approximate way. Under this approximation (outlined under Hartree–Fock algorithm), all of the terms of the exact Hamiltonian except the nuclear-nuclear repulsion term are re-expressed as the sum of one-electron operators outlined below, for closed-shell atoms or molecules (with two electrons in each spatial orbital).[28] The "(1)" following each operator symbol simply indicates that the operator is 1-electron in nature:

    F̂(1) = Ĥ^core(1) + Σ_{j=1}^{n/2} [ 2Ĵ_j(1) − K̂_j(1) ],

where F̂(1) is the one-electron Fock operator generated by the orbitals ϕ_j, and

    Ĥ^core(1) = −(1/2)∇₁² − Σ_α ( Z_α / r_{1α} )

is the one-electron core Hamiltonian. Also, Ĵ_j is the Coulomb operator,

    Ĵ_j(1) = ∫ |ϕ_j(2)|² (1/r₁₂) dv₂,

defining the electron-electron repulsion energy due to each of the two electrons in the jth orbital.[28]

Finally, K̂_j is the exchange operator,

    K̂_j(1) f(1) = ϕ_j(1) ∫ ϕ_j*(2) (1/r₁₂) f(2) dv₂,

defining the electron exchange energy due to the antisymmetry of the total n-electron wave function.[28]

This (so-called) "exchange energy" operator, K, is simply an artifact of the Slater determinant.

Finding the Hartree–Fock one-electron wave functions is now equivalent to solving the eigenfunction equation

    F̂(1) ϕ_i(1) = ε_i ϕ_i(1),

where the ϕ_i(1) are a set of one-electron wave functions, called the Hartree–Fock molecular orbitals, and the ε_i are the corresponding orbital energies.

Fock matrix


In the Hartree–Fock method of quantum mechanics, the Fock matrix is a matrix approximating the single-electron energy operator of a given quantum system in a given set of basis vectors.[29]

It is most often formed in computational chemistry when attempting to solve the Roothaan equations for an atomic or molecular system. The Fock matrix is actually an approximation to the true Hamiltonian operator of the quantum system. It includes the effects of electron-electron repulsion only in an average way. Importantly, because the Fock operator is a one-electron operator, it does not include the electron correlation energy.

The Fock matrix is defined by the Fock operator. For the restricted case which assumes closed-shell orbitals and single-determinantal wavefunctions, the Fock operator for the i-th electron is given by:[30]

    F̂(i) = ĥ(i) + Σ_{j=1}^{n/2} [ 2Ĵ_j(i) − K̂_j(i) ],

where:

    F̂(i) is the Fock operator for the i-th electron in the system,
    ĥ(i) is the one-electron Hamiltonian for the i-th electron,
    n is the number of electrons and n/2 is the number of occupied orbitals in the closed-shell system,
    Ĵ_j(i) is the Coulomb operator, defining the repulsive force between the j-th and i-th electrons in the system,
    K̂_j(i) is the exchange operator, defining the quantum effect produced by exchanging two electrons.

The Coulomb operator is multiplied by two since there are two electrons in each occupied orbital. The exchange operator is not multiplied by two since it has a non-zero result only for electrons which have the same spin as the i-th electron.

For systems with unpaired electrons there are many choices of Fock matrices.

Linear combination of atomic orbitals


Typically, in modern Hartree–Fock calculations, the one-electron wave functions are approximated by a linear combination of atomic orbitals. These atomic orbitals are called Slater-type orbitals. Furthermore, it is very common for the "atomic orbitals" in use to actually be composed of a linear combination of one or more Gaussian-type orbitals, rather than Slater-type orbitals, in the interests of saving large amounts of computation time.

Various basis sets are used in practice, most of which are composed of Gaussian functions. In some applications, an orthogonalization method such as the Gram–Schmidt process is performed in order to produce a set of orthogonal basis functions. This can in principle save computational time when the computer is solving the Roothaan–Hall equations by converting the overlap matrix effectively to an identity matrix. However, in most modern computer programs for molecular Hartree–Fock calculations this procedure is not followed due to the high numerical cost of orthogonalization and the advent of more efficient, often sparse, algorithms for solving the generalized eigenvalue problem, of which the Roothaan–Hall equations are an example.
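The overall procedure can be summarized in code. Below is a minimal sketch of the restricted Hartree–Fock self-consistent loop, assuming the one- and two-electron integrals in the AO basis (S, Hcore, eri) have already been computed by some integral library; the array names, conventions, and starting guess are illustrative, not a specific program's API.

    import numpy as np
    from scipy.linalg import eigh

    def rhf_scf(S, Hcore, eri, n_electrons, max_iter=50, tol=1e-8):
        """Restricted Hartree-Fock SCF loop (illustrative sketch).

        S      : (n, n) AO overlap matrix
        Hcore  : (n, n) one-electron core Hamiltonian
        eri    : (n, n, n, n) two-electron integrals (pq|rs), chemists' notation
        """
        n_occ = n_electrons // 2
        D = np.zeros_like(Hcore)                  # initial guess: zero density
        E_old = 0.0
        for _ in range(max_iter):
            # Build the Fock matrix: F = Hcore + 2J - K
            J = np.einsum('pqrs,rs->pq', eri, D)  # Coulomb
            K = np.einsum('prqs,rs->pq', eri, D)  # exchange
            F = Hcore + 2.0 * J - K
            # Solve the Roothaan-Hall generalized eigenvalue problem F C = S C e
            eps, C = eigh(F, S)
            # New density matrix from the n_occ lowest orbitals
            C_occ = C[:, :n_occ]
            D = C_occ @ C_occ.T
            # Electronic energy (nuclear repulsion not included)
            E = np.sum(D * (Hcore + F))
            if abs(E - E_old) < tol:
                break
            E_old = E
        return E, C, eps

In a real code the initial guess is usually the core Hamiltonian or a superposition of atomic densities, and convergence acceleration such as DIIS is applied; those refinements are omitted here.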

DFT Derivation and formalism


As usual in many-body electronic structure calculations, the nuclei of the treated molecules or clusters are seen as fixed (the Born–Oppenheimer approximation), generating a static external potential V in which the electrons are moving. A stationary electronic state is then described by a wavefunction Ψ(r₁, …, r_N) satisfying the many-electron time-independent Schrödinger equation

    Ĥ Ψ = [ T̂ + V̂ + Û ] Ψ = E Ψ,

where, for the N-electron system, Ĥ is the Hamiltonian, E is the total energy, T̂ is the kinetic energy, V̂ is the potential energy from the external field due to positively charged nuclei, and Û is the electron-electron interaction energy. The operators T̂ and Û are called universal operators as they are the same for any N-electron system, while V̂ is system dependent. This complicated many-particle equation is not separable into simpler single-particle equations because of the interaction term Û.

There are many sophisticated methods for solving the many-body Schrödinger equation based on the expansion of the wavefunction in Slater determinants. While the simplest one is the Hartree–Fock method, more sophisticated approaches are usually categorized as post-Hartree–Fock methods. However, the problem with these methods is the huge computational effort, which makes it virtually impossible to apply them efficiently to larger, more complex systems.

Here DFT provides an appealing alternative, being much more versatile as it provides a way to systematically map the many-body problem, with Û, onto a single-body problem without Û. In DFT the key variable is the particle density n(r), which for a normalized Ψ is given by

    n(r) = N ∫ d³r₂ ⋯ ∫ d³r_N |Ψ(r, r₂, …, r_N)|².

This relation can be reversed, i.e. for a given ground-state density n₀(r) it is possible, in principle, to calculate the corresponding ground-state wavefunction Ψ₀(r₁, …, r_N). In other words, Ψ₀ is a unique functional of n₀,[31]

    Ψ₀ = Ψ[n₀],

and consequently the ground-state expectation value of an observable Ô is also a functional of n₀:

    O[n₀] = ⟨Ψ[n₀]| Ô |Ψ[n₀]⟩.

In particular, the ground-state energy is a functional of n₀:

    E₀ = E[n₀] = ⟨Ψ[n₀]| T̂ + V̂ + Û |Ψ[n₀]⟩,

where the contribution of the external potential ⟨Ψ[n₀]| V̂ |Ψ[n₀]⟩ can be written explicitly in terms of the ground-state density:

    V[n₀] = ∫ V(r) n₀(r) d³r.

More generally, the contribution of the external potential can be written explicitly in terms of the density n,

    V[n] = ∫ V(r) n(r) d³r.

The functionals T[n] and U[n] are called universal functionals, while V[n] is called a non-universal functional, as it depends on the system under study. Having specified a system, i.e., having specified V̂, one then has to minimize the functional

    E[n] = T[n] + U[n] + ∫ V(r) n(r) d³r

with respect to n(r), assuming one has reliable expressions for T[n] and U[n]. A successful minimization of the energy functional will yield the ground-state density n₀ and thus all other ground-state observables.

The variational problems of minimizing the energy functional E[n] can be solved by applying the Lagrangian method of undetermined multipliers.[32] First, one considers an energy functional that does not explicitly have an electron-electron interaction energy term,

    E_s[n] = ⟨Ψ_s[n]| T̂_s + V̂_s |Ψ_s[n]⟩,

where T̂_s denotes the kinetic-energy operator and V̂_s is an external effective potential in which the particles are moving, so that n_s(r) = n(r).

Thus, one can solve the so-called Kohn–Sham equations of this auxiliary non-interacting system,

    [ −(ħ²/2m)∇² + V_s(r) ] ϕ_i(r) = ε_i ϕ_i(r),

which yields the orbitals ϕ_i that reproduce the density n(r) of the original many-body system:

    n(r) = n_s(r) = Σ_{i=1}^{N} |ϕ_i(r)|².

The effective single-particle potential can be written in more detail as

    V_s(r) = V(r) + ∫ ( e² n_s(r′) / |r − r′| ) d³r′ + V_XC[n_s(r)],

where the second term denotes the so-called Hartree term describing the electron-electron Coulomb repulsion, while the last term, V_XC, is called the exchange-correlation potential. Here, V_XC includes all the many-particle interactions. Since the Hartree term and V_XC depend on n_s(r), which depends on the ϕ_i, which in turn depend on V_s, the problem of solving the Kohn–Sham equation has to be done in a self-consistent (i.e., iterative) way. Usually one starts with an initial guess for n(r), then calculates the corresponding V_s and solves the Kohn–Sham equations for the ϕ_i. From these one calculates a new density and starts again. This procedure is then repeated until convergence is reached. A non-iterative approximate formulation called Harris functional DFT is an alternative approach to this.

NOTE: The one-to-one correspondence between the electron density and the single-particle potential is not smooth; it contains various kinds of non-analytic structure and singularities. This may indicate a limitation of the hope of representing the exchange-correlation functional in a simple analytic form.

Approximations (exchange-correlation functionals)


The major problem with DFT is that the exact functionals for exchange and correlation are not known except for the free electron gas. However, approximations exist which permit the calculation of certain physical quantities quite accurately. In physics the most widely used approximation is the local-density approximation (LDA), where the functional depends only on the density at the coordinate where the functional is evaluated:

    E_XC^LDA[n] = ∫ ε_XC(n) n(r) d³r.

The local spin-density approximation (LSDA) is a straightforward generalization of the LDA to include electron spin:

    E_XC^LSDA[n↑, n↓] = ∫ ε_XC(n↑, n↓) n(r) d³r.

Highly accurate formulae for the exchange-correlation energy density have been constructed from quantum Monte Carlo simulations of jellium.[33]
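For the exchange part of the LDA, the homogeneous-gas result is known in closed form (the Dirac exchange); as an illustrative sketch, it can be evaluated for the hydrogen ground-state density used in the Thomas–Fermi example above (spin-unpolarized formula):

    import numpy as np

    # LDA (Dirac) exchange energy: E_x = -(3/4) (3/pi)^(1/3) ∫ n(r)^(4/3) d^3r,
    # evaluated for the hydrogen ground-state density n(r) = exp(-2r)/pi (a.u.).
    r = np.linspace(1e-6, 30.0, 200_000)
    n = np.exp(-2.0 * r) / np.pi

    C_x = -(3.0 / 4.0) * (3.0 / np.pi) ** (1.0 / 3.0)
    E_x = C_x * np.trapz(n ** (4.0 / 3.0) * 4.0 * np.pi * r**2, r)
    print(E_x)   # ≈ -0.213 hartree; the exact exchange energy for H is -0.3125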

Generalized gradient approximations (GGA) are still local but also take into account the gradient of the density at the same coordinate:

    E_XC^GGA[n↑, n↓] = ∫ ε_XC(n↑, n↓, ∇n↑, ∇n↓) n(r) d³r.

Using the latter (GGA), very good results for molecular geometries and ground-state energies have been achieved.

Potentially more accurate than the GGA functionals are the meta-GGA functionals, a natural development after the GGA (generalized gradient approximation). Meta-GGA DFT functional in its original form includes the second derivative of the electron density (the Laplacian) whereas GGA includes only the density and its first derivative in the exchange-correlation potential.

Functionals of this type are, for example, TPSS and the Minnesota Functionals. These functionals include a further term in the expansion, depending on the density, the gradient of the density and the Laplacian (second derivative) of the density.

Difficulties in expressing the exchange part of the energy can be relieved by including a component of the exact exchange energy calculated from Hartree–Fock theory. Functionals of this type are known as hybrid functionals.

Hohenberg–Kohn theorems


1. If two systems of electrons, one trapped in a potential v₁(r) and the other in v₂(r), have the same ground-state density n(r), then v₁(r) − v₂(r) is necessarily a constant.

Corollary: the ground-state density uniquely determines the potential and thus all properties of the system, including the many-body wave function. In particular, the "HK" functional, defined as F[n] = T[n] + U[n], is a universal functional of the density (not depending explicitly on the external potential).

2. For any positive integer N and potential v(r), there exists a density functional F[n] such that E_{(v,N)}[n] = F[n] + ∫ v(r) n(r) d³r obtains its minimal value at the ground-state density of N electrons in the potential v(r). The minimal value of E_{(v,N)}[n] is then the ground-state energy of this system.
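In symbols, the second theorem is a constrained variational principle (a standard restatement; the constrained-search form of F is due to Levy):

    E_0 = \min_{n} \left( F[n] + \int v(\mathbf r)\, n(\mathbf r)\, d^3 r \right),
    \qquad
    F[n] = \min_{\Psi \to n} \langle \Psi | \hat T + \hat U | \Psi \rangle ,

where the minimization runs over N-electron densities n and, in the second expression, over all antisymmetric wavefunctions Ψ yielding the density n.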

Pseudo-potentials


The many-electron Schrödinger equation can be greatly simplified if electrons are divided into two groups: valence electrons and inner core electrons. The electrons in the inner shells are strongly bound and do not play a significant role in the chemical binding of atoms; they also partially screen the nucleus, thus forming with the nucleus an almost inert core. Binding properties are almost completely due to the valence electrons, especially in metals and semiconductors. This separation suggests that inner electrons can be ignored in a large number of cases, thereby reducing the atom to an ionic core that interacts with the valence electrons. The use of an effective interaction, a pseudopotential, that approximates the potential felt by the valence electrons, was first proposed by Fermi in 1934 and Hellmann in 1935. In spite of the simplification pseudo-potentials introduce in calculations, they remained forgotten until the late 1950s.

Ab initio Pseudo-potentials

A crucial step toward more realistic pseudo-potentials was given by Topp and Hopfield and more recently Cronin, who suggested that the pseudo-potential should be adjusted such that it describes the valence charge density accurately. Based on that idea, modern pseudo-potentials are obtained by inverting the free-atom Schrödinger equation for a given reference electronic configuration and forcing the pseudo wave-functions to coincide with the true valence wave-functions beyond a certain distance r_l. The pseudo wave-functions are also forced to have the same norm as the true valence wave-functions, and can be written as

    R_l^{PP}(r) = R_{nl}^{AE}(r)   for r > r_l,
    ∫₀^{r_l} |R_l^{PP}(r)|² r² dr = ∫₀^{r_l} |R_{nl}^{AE}(r)|² r² dr,

where R_l(r) is the radial part of the wavefunction with angular momentum l, and PP and AE denote, respectively, the pseudo wave-function and the true (all-electron) wave-function. The index n in the true wave-functions denotes the valence level. The distance beyond which the true and the pseudo wave-functions are equal, r_l, is also l-dependent.

  1. ^ (Parr & Yang 1989, p. 246, Eq. A.2).
  2. ^ (Parr & Yang 1989, p. 246, Eq. A.1).
  3. ^ a b (Parr & Yang 1989, p. 246).
  4. ^ (Parr & Yang 1989, p. 247, Eq. A.3).
  5. ^ (Parr & Yang 1989, p. 247, Eq. A.4).
  6. ^ (Greiner & Reinhardt 1996, p. 38, Eq. 7).
  7. ^ (Parr & Yang 1989, p. 251, Eq. A.34).
  8. ^ (Parr & Yang 1989, p. 247, Eq. A.6).
  9. ^ (Parr & Yang 1989, p. 248, Eq. A.11).
  10. ^ (Parr & Yang 1989, p. 247, Eq. A.9).
  11. ^ (Giaquinta & Hildebrandt 1996, p. 18)
  12. ^ (Gelfand & Fomin 2000, p. 28)
  13. ^ (Parr & Yang 1989, p. 47)
  14. ^ March, N. H. (1992). Electron Density Theory of Atoms and Molecules. Academic Press. p. 24. ISBN 0-12-470525-1.
  15. ^ March 1992, p.24
  16. ^ Parr and Yang 1989, p.47
  17. ^ March 1983, p. 5, Eq. 11
  18. ^ March 1983, p. 6, Eq. 15
  19. ^ Teller, E. (1962). "On the Stability of molecules in the Thomas–Fermi theory". Rev. Mod. Phys. 34 (4): 627–631. Bibcode:1962RvMP...34..627T. doi:10.1103/RevModPhys.34.627.
  20. ^ Balàzs, N. (1967). "Formation of stable molecules within the statistical theory of atoms". Phys. Rev. 156 (1): 42–47. Bibcode:1967PhRv..156...42B. doi:10.1103/PhysRev.156.42.
  21. ^ Lieb, Elliott H.; Simon, Barry (1977). "The Thomas–Fermi theory of atoms, molecules and solids". Adv. In Math. 23 (1): 22–116. doi:10.1016/0001-8708(77)90108-6.
  22. ^ Parr and Yang 1989, pp.114–115
  23. ^ Parr and Yang 1989, p.127
  24. ^ Weizsäcker, C. F. v. (1935). "Zur Theorie der Kernmassen". Zeitschrift für Physik. 96 (7–8): 431–58. Bibcode:1935ZPhy...96..431W. doi:10.1007/BF01337700.
  25. ^ http://www.tcm.phy.cam.ac.uk/~pdh1001/thesis/node14.html
  26. ^ Hinchliffe, Alan (2000). Modelling Molecular Structures (2nd ed.). Chichester, West Sussex: John Wiley & Sons Ltd. p. 186. ISBN 0-471-48993-X.
  27. ^ a b Szabo, A.; Ostlund, N. S. (1996). Modern Quantum Chemistry. Mineola, New York: Dover Publishing. ISBN 0-486-69186-1.
  28. ^ a b c Levine, Ira N. (1991). Quantum Chemistry (4th ed.). Englewood Cliffs, New Jersey: Prentice Hall. p. 403. ISBN 0-205-12770-3.
  29. ^ Callaway, J. (1974). Quantum Theory of the Solid State. New York: Academic Press. ISBN 9780121552039.
  30. ^ Levine, I.N. (1991) Quantum Chemistry (4th ed., Prentice-Hall), p.403
  31. ^ Hohenberg, P.; Kohn, W. (1964). "Inhomogeneous Electron Gas". Physical Review. 136 (3B): B864–B871. doi:10.1103/PhysRev.136.B864.
  32. ^ Kohn, W.; Sham, L. J. (1965). "Self-consistent equations including exchange and correlation effects". Physical Review. 140 (4A): A1133–A1138. Bibcode:1965PhRv..140.1133K. doi:10.1103/PhysRev.140.A1133.
  33. ^ Perdew, John P.; Ruzsinszky, Adrienn; Tao, Jianmin; Staroverov, Viktor N.; Scuseria, Gustavo; Csonka, Gábor I. (2005). "Prescriptions for the design and selection of density functional approximations: More constraint satisfaction with fewer fits". Journal of Chemical Physics. 123 (6): 062201. Bibcode:2005JChPh.123f2201P. doi:10.1063/1.1904565. PMID 16122287.