Calculus of variations is utilized to minimize the elastic energy arising from the squared curvature while maximizing the van der Waals energy. Firstly, the shape of folded graphene sheets is investigated, and an arbitrary constant arising from integrating the Euler–Lagrange equation is determined. In this study, the structure is assumed to have translational symmetry along the fold, so that the problem may be reduced to a two-dimensional problem with reflective symmetry across the fold. Secondly, both the variational calculus technique and a least-squares minimization procedure are employed to determine the joining structure involving a C60 fullerene and a carbon nanotube, namely a nanobud. We find that these two methods are in reasonable overall agreement; however, there is no experimental or simulation data to determine which procedure gives the more realistic results.
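The arbitrary constant mentioned above is an instance of the standard first integral of the Euler–Lagrange equation. The following generic sketch (the study's specific energy functional is not reproduced here) shows where such a constant comes from:

```latex
% For a functional whose integrand does not depend explicitly on the
% arc-length parameter s,
J[y] = \int f(y, y')\,\mathrm{d}s ,
% the Euler--Lagrange equation
\frac{\partial f}{\partial y} - \frac{\mathrm{d}}{\mathrm{d}s}\frac{\partial f}{\partial y'} = 0
% admits the first integral (Beltrami identity)
f - y'\,\frac{\partial f}{\partial y'} = C ,
% where C is the arbitrary constant of integration, fixed in the folded-sheet
% problem by the boundary and symmetry conditions across the fold.
```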
We discuss various optimisation-based approaches to machine learning. Tasks include regression, clustering, and classification. We discuss frequently used terms like 'unsupervised learning,' 'penalty methods,' and 'dual problem.' We motivate our discussion with simple examples and visualisations.
We investigate the construction of multidimensional prolate spheroidal wave functions using techniques from Clifford analysis. The prolates are eigenfunctions of a time-frequency limiting operator, but we show that they are also eigenfunctions of a differential operator. In an effort to compute solutions of this operator, we prove a Bonnet formula for a class of Clifford-Gegenbauer polynomials.
The models of collective decision-making considered in this presentation are nonlinear interconnected systems with saturating interactions, similar to Hopfield networks. These systems encode the possible outcomes of a decision process into different steady states of the dynamics. When the model is cooperative, i.e., when the underlying adjacency matrix is Metzler, the system is characterized by the presence of two main attractors, one positive and the other negative, representing two choices of agreement among the agents, associated with the Perron-Frobenius eigenvector of the system. Such equilibria are achieved when there is a sufficiently high 'social commitment' among the agents (here interpreted as a bifurcation parameter). When instead cooperation and antagonism coexist, the resulting signed graph is in general not structurally balanced, meaning that the Perron-Frobenius theorem does not apply directly. It is shown that the decision-making process is affected by the distance to structural balance, in the sense that the higher the frustration of the graph, the higher the commitment strength at which the system bifurcates. In both cases, it is possible to give conditions on the commitment strength beyond which other equilibria start to appear. These extra bifurcations are related to the algebraic connectivity of the graph.
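The role of the commitment strength as a bifurcation parameter can be illustrated with a toy two-agent cooperative model. The dynamics, adjacency matrix and parameter values below are illustrative assumptions, not the specific model of the talk:

```python
import math

def simulate(u, steps=20000, dt=0.01):
    """Euler-integrate a hypothetical 2-agent cooperative model
    dx_i/dt = -x_i + u * sum_j A_ij * tanh(x_j),
    with adjacency A = [[0, 1], [1, 0]] (spectral radius 1),
    where u plays the role of the social commitment."""
    x = [0.1, 0.1]  # small positive initial opinions
    for _ in range(steps):
        x = [x[0] + dt * (-x[0] + u * math.tanh(x[1])),
             x[1] + dt * (-x[1] + u * math.tanh(x[0]))]
    return x

# Below the critical commitment (u < 1) opinions decay to the undecided
# state; above it (u > 1) a positive agreement equilibrium appears.
low = simulate(0.5)
high = simulate(1.5)
```

Here the bifurcation threshold is the reciprocal of the spectral radius of the adjacency matrix, mirroring the Perron-Frobenius structure described above.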
Sea ice acts as a refrigerator for the world. Its bright surface reflects solar heat, and the salt it expels during the freezing process drives cold water towards the equator. As a result, sea ice plays a crucial role in our climate system. Antarctic sea-ice extent has shown a large degree of regional variability, in stark contrast with the steadily decreasing trend found in the Arctic. This variability is within the range of natural fluctuations, and may be ascribed to the high incidence of weather extremes, such as intense cyclones, that give rise to large waves, significant wind drag, and ice deformation. The role exerted by waves on sea ice is still particularly enigmatic, and it has attracted a lot of attention over the past years. Starting from theoretical knowledge, new understanding based on experimental models and computational fluid dynamics is presented. But the exploration of waves-in-ice cannot be exhausted without going into the field. And this is why I found myself in the middle of the Southern Ocean during a category five polar cyclone to measure waves…
Motivated by the construction of conformal field theories, Jones recently discovered a very general process that produces actions of the Thompson groups $F$, $T$ and $V$, such as unitary representations or actions on $C^{\ast}$-algebras. I will give a general panorama of this construction along with many examples, and present various applications regarding analytical properties of groups and, if time permits, lattice theory (e.g. in quantum field theory).
Let $t$ be the multiplicative inverse of the golden mean. In 1995 Sean Cleary introduced the irrational-slope Thompson's group $F_t$, which is the group of piecewise-linear maps of the interval $[0,1]$ with breaks in $\mathbb{Z}[t]$ and slopes powers of $t$. In this talk we describe this group using tree-pair diagrams, then demonstrate a finite presentation and a normal form, and prove that its commutator subgroup is simple. This group is the first example of a group of piecewise-linear maps of the interval whose abelianisation has torsion, and it is an open problem whether this group is a subgroup of Thompson's group $F$.
A Jonsson-Tarski algebra is a set $X$ endowed with an isomorphism $X \to X \times X$. As observed by Freyd, the category of Jonsson-Tarski algebras is a Grothendieck topos - a highly structured mathematical object which is at once a generalised topological space, and a generalised universe of sets.
In particular, one can do algebra, topology and functional analysis inside the Jonsson-Tarski topos, and on doing so, the following objects simply pop out: Cantor space; Thompson's group $V$; the Leavitt algebra $L_2$; the Cuntz semigroup $S_2$; and the reduced $C^{\ast}$-algebra of $S_2$. The first objective of this talk is to explain how this happens.
The second objective is to describe other "self-similar toposes" associated to, for example, self-similar group actions, directed graphs and higher-rank graphs; and again, each such topos contains within it a familiar menagerie of algebraic-analytic objects. If time permits, I will also explain a further intriguing example which gives rise to Thompson's group $F$ and, I suspect, the Farey AF algebra.
No expertise in topos theory is required; such background as is necessary will be developed in the talk.
It is commonly expected that $e$, $\log 2$, $\sqrt{2}$, among other `classical' numbers, behave, in many respects, like almost all real numbers. For instance, their decimal expansion should contain every finite block of digits from $\{0, \ldots , 9\}$. We are very far away from establishing such a strong assertion. However, there has been some small recent progress in that direction. Let $\xi$ be an irrational real number. Its irrationality exponent, denoted by $\mu (\xi)$, is the supremum of the real numbers $\mu$ for which there are infinitely many integer pairs $(p, q)$ such that $|\xi - \frac{p}{q}| < q^{-\mu}$. It measures the quality of approximation to $\xi$ by rationals. We always have $\mu (\xi) \ge 2$, with equality for almost all real numbers and for irrational algebraic numbers (by Roth's theorem). We prove that, if the irrationality exponent of $\xi$ is equal to $2$ or slightly greater than $2$, then the decimal expansion of $\xi$ cannot be `too simple', in a suitable sense. Our result applies, among other classical numbers, to badly approximable numbers, non-zero rational powers of ${{\rm e}}$, and $\log (1 + \frac{1}{a})$, provided that the integer $a$ is sufficiently large. It establishes an unexpected connection between the irrationality exponent of a real number and its decimal expansion.
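Two standard examples illustrate the range of the exponent just defined (these examples are classical facts, not results of the abstract):

```latex
% Liouville's number is extremely well approximable by rationals:
\mu\Bigl(\,\sum_{n \ge 1} 10^{-n!}\Bigr) = \infty ,
% while the golden mean, the archetypal badly approximable number
% (continued fraction [1;1,1,\ldots]), attains the minimal value:
\mu\Bigl(\tfrac{1+\sqrt{5}}{2}\Bigr) = 2 .
```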
I introduce and demonstrate the Coq interactive theorem prover.
The old joke is that a topologist can’t distinguish between a coffee cup and a doughnut. A recent variant of homology, called persistent homology, can be used in data analysis to understand the shape of data. I will give an introduction to persistent homology and describe two example applications of this tool.
Imagine a world where physical and chemical laboratories are unnecessary, because all experiments can be simulated accurately on a computer. In principle this is possible by solving the quantum mechanical Schrödinger equation. Unfortunately, this is far from trivial and practically impossible for large and complex materials and reactions. In 1998, Walter Kohn and John A. Pople won the Nobel Prize in Chemistry for developing density-functional theory (DFT). DFT makes it possible to find solutions of the Schrödinger equation much more efficiently than ab initio and similar approaches, thus enabling the computation of materials properties in an unprecedented way. In this seminar, I will introduce quantum mechanical principles and the basic idea of DFT. Then, I will present an example of the computational elucidation of a reaction mechanism in materials science.
This project aims to investigate algebraic objects known as 0-dimensional groups, which are a mathematical tool for analysing the symmetry of infinite networks. Group theory has been used to classify possible types of symmetry in various contexts for nearly two centuries now, and 0-dimensional groups are the current frontier of knowledge. The expected outcome of the project is that the understanding of the abstract groups will be substantially advanced, and that this understanding will shed light on structures possessing 0-dimensional symmetry. In addition to being cultural achievements in their own right, advances in group theory such as this also often have significant translational benefits. Anticipated benefits include the creation of tools relevant to information science and the training of researchers in the use of these tools.
The project aims to develop novel techniques to investigate geometric analysis on infinite-dimensional bundles, as well as geometric analysis of pathological spaces with a Cantor set as fibre, that arise in models for the fractional quantum Hall effect and topological matter, areas recognised with the 1998 and 2016 Nobel Prizes. Building on the applicant's expertise in the area, the project will involve postgraduate and postdoctoral training in order to enhance Australia's position at the forefront of international research in geometric analysis. Ultimately, the project will enhance Australia's leading position in the area of index theory by developing novel techniques to solve challenging conjectures, and by mentoring HDR students and ECRs.
This project aims to solve hard, outstanding problems which have impeded our ability to progress in the area of quantum or noncommutative calculus. Calculus has provided an invaluable tool to science, enabling scientific and technological revolutions throughout the past two centuries. The project will initiate a program of collaboration among top mathematical researchers from around the world and bring together two separate mathematical areas into a powerful new set of tools. The outcomes from the project will impact research at the forefront of mathematical physics and other sciences and enhance Australia's reputation and standing.
Mahler's method in number theory is an area wherein one answers questions surrounding the transcendence and algebraic independence of both power series $F(z)$, which satisfy the functional equation $$a_0(z)F(z)+a_1(z)F(z^k)+\cdots+a_d(z)F(z^{k^d})=0$$ for some integers $k\geqslant 2$ and $d\geqslant 1$ and polynomials $a_0(z),\ldots,a_d(z)$, and their special values $F(\alpha)$, typically at algebraic numbers $\alpha$. The most important examples of Mahler functions arise from central sequences in theoretical computer science and dynamical systems, and many are related to digital properties of sets of numbers. For example, the generating function $T(z)$ of the Thue-Morse sequence, which is known to be the fixed point of a uniform morphism in computer science or, equivalently, of a constant-length substitution system in dynamics, is a Mahler function. In 1930, Mahler proved that the numbers $T(\alpha)$ are transcendental for all non-zero algebraic numbers $\alpha$ in the complex open unit disc. With digital computers and computation so prevalent in our society, such results seem almost second nature these days and thinking about them is very natural. But what is one really trying to communicate by proving that functions or numbers such as those considered in Mahler's method are transcendental?
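A Mahler functional equation of the displayed form can be checked numerically in the Thue-Morse case. The sketch below uses the $\pm 1$ version of the sequence, whose generating function $G(z)=\prod_{k\ge 0}(1-z^{2^k})$ satisfies $G(z)=(1-z)G(z^2)$, a Mahler equation with $k=2$ and $d=1$; this concrete verification is illustrative and not part of the abstract:

```python
# Coefficients of G(z) = sum_n (-1)^{t_n} z^n, where t_n is the
# Thue-Morse sequence (parity of the binary digit sum of n).
N = 64
g = [(-1) ** bin(n).count("1") for n in range(N)]

# Coefficients of G(z^2): nonzero only in even positions.
h = [g[n // 2] if n % 2 == 0 else 0 for n in range(N)]

# Coefficients of (1 - z) * G(z^2).
rhs = [h[n] - (h[n - 1] if n > 0 else 0) for n in range(N)]

# The Mahler equation G(z) = (1 - z) G(z^2) holds term by term.
assert g == rhs
```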
In this talk, highlighting work from the very beginning of Mahler's career, we speculate---and provide some variations---on what Mahler was really trying to understand. This talk will combine modern and historical methods and will be accessible to students.
In this talk, we will present a brief overview of mathematical diffraction of structures that lack translational symmetry but may nevertheless exhibit long-range order. We introduce aperiodic tilings as toy models for such structures and discuss the relevant measure-theoretic formulation of the diffraction analysis. In particular, we focus on the component of the diffraction that suggests stochasticity but can be non-trivial for deterministic systems, and on how its absence can be confirmed using techniques involving Lyapunov exponents and Mahler measures. This is joint work with Michael Baake, Michael Coons, Franz Gaehler and Uwe Grimm.
The problem of packing space with regular tetrahedra has a 2000 year history. This talk surveys the history of work on the problem. It includes work by mathematicians, computer scientists, physicists, chemists, and materials scientists. Much progress has been made on it in recent years, yet there remain many unsolved problems.
In this talk, I will show how to build $C^*$-algebras using a family of local homeomorphisms. We will then compute the KMS states of the resulting algebras using the Laca-Neshveyev machinery. I will then apply this result to $C^*$-algebras of $k$-graphs and obtain interesting $C^*$-algebraic information about $k$-graph algebras. This talk is based on joint work with Astrid an Huef and Iain Raeburn.
The KMS condition for equilibrium states of C*-dynamical systems has been around since the 1960s. With the introduction of systems arising from number theory and from semigroup dynamics following pioneering work of Bost and Connes, their study has accelerated significantly in the last 25 years. I will give a brief introduction to C*-dynamical systems and their KMS states and discuss two constructions that exhibit fascinating connections with key open questions in mathematics such as Hilbert's 12th problem on explicit class field theory and Furstenberg's $\times 2$, $\times 3$ conjecture.
Using a variant of the Laca-Raeburn program for calculating KMS states, Laca, Raeburn, Ramagge and Whittaker showed that, at any inverse temperature above a critical value, the KMS states arising from self-similar actions of groups (or groupoids) $G$ are parameterised by traces on C*(G). The parameterisation takes the form of a self-mapping $\chi$ of the trace space of C*(G) that is built from the structure of the stabilisers of the self-similar action. I will outline how this works, and then sketch how to see that $\chi$ has a unique fixed point, which picks out the ``preferred'' trace of C*(G) corresponding to the only KMS state that persists at the critical inverse temperature. The first part of this will be an exposition of results of Laca-Raeburn-Ramagge-Whittaker. The second part is joint work with Joan Claramunt.
Zombies are a popular figure in pop culture/entertainment and they are usually portrayed as being brought about through an outbreak or epidemic. Consequently, we model a zombie attack, using biological assumptions based on popular zombie movies. We introduce a basic model for zombie infection, determine equilibria and their stability, and illustrate the outcome with numerical solutions. We then refine the model to introduce a latent period of zombification, whereby humans are infected, but not infectious, before becoming undead. We then modify the model to include the effects of possible quarantine or a cure. Finally, we examine the impact of regular, impulsive reductions in the number of zombies and derive conditions under which eradication can occur. We show that only quick, aggressive attacks can stave off the doomsday scenario: the collapse of society as zombies overtake us all.
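The basic infection model described above can be sketched as a small ODE system. The SZR structure and the parameter values below are illustrative assumptions, not necessarily those of the talk:

```python
# Basic susceptible-zombie-removed (SZR) sketch, forward-Euler integration.
# beta:  transmission (humans become zombies),
# alpha: zombies destroyed by humans,
# zeta:  removed individuals rising again as zombies,
# delta: natural (non-zombie) death rate.
def simulate(s0=500.0, z0=1.0, r0=0.0,
             beta=0.0095, alpha=0.005, zeta=0.0001, delta=0.0001,
             dt=0.005, t_end=10.0):
    s, z, r = s0, z0, r0
    for _ in range(int(t_end / dt)):
        ds = -beta * s * z - delta * s
        dz = beta * s * z - alpha * s * z + zeta * r
        dr = alpha * s * z + delta * s - zeta * r
        s, z, r = s + dt * ds, z + dt * dz, r + dt * dr
    return s, z, r

s, z, r = simulate()
# With beta > alpha the doomsday equilibrium wins: zombies overtake humans.
```

Note that the three right-hand sides sum to zero, so the total population is conserved along the numerical trajectory; this is a useful sanity check on the integrator.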
During my study leave in 2018 I applied nonlinear stability analysis techniques to the Douglas-Rachford algorithm, with the aim of shedding light on the interesting non-convex case, where convergence is often observed but seldom proven. The Douglas-Rachford algorithm can solve optimisation and feasibility problems; it provably converges weakly to solutions in the convex case, and constitutes a practical heuristic in non-convex cases. Lyapunov functions are stability certificates for difference inclusions in nonlinear stability analysis. Some other recent nonlinear stability results are showcased as well.
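As a minimal illustration of the iteration itself: for sets $A$, $B$ with projections $P_A$, $P_B$ and reflections $R = 2P - I$, one Douglas-Rachford step is $x^{+} = \tfrac12\bigl(x + R_B R_A x\bigr)$, and in the convex case the shadow sequence $P_A x_k$ approaches $A \cap B$. The convex two-line feasibility problem below is a textbook example, not the non-convex setting studied in the talk:

```python
# Douglas-Rachford for the feasibility problem A ∩ B in R^2, where
# A is the x-axis and B is the line y = x (intersection: the origin).

def proj_A(p):            # projection onto the x-axis
    return (p[0], 0.0)

def proj_B(p):            # projection onto the line y = x
    m = (p[0] + p[1]) / 2.0
    return (m, m)

def reflect(proj, p):     # reflection R = 2P - I
    q = proj(p)
    return (2 * q[0] - p[0], 2 * q[1] - p[1])

def dr_step(p):           # x+ = (x + R_B(R_A(x))) / 2
    q = reflect(proj_B, reflect(proj_A, p))
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

x = (1.0, 1.0)
for _ in range(100):
    x = dr_step(x)
shadow = proj_A(x)        # the shadow sequence converges to A ∩ B = {(0, 0)}
```

For these two lines the step is a linear contraction, so convergence is fast; the non-convex case discussed in the talk is precisely where such certificates are hard to obtain and Lyapunov arguments become valuable.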