This talk will be about the new course in mathematics at the University of Newcastle, MATH2005, Einstein, Bach and the Taj Mahal: Symmetry in the Arts, Sciences and Humanities. The course handbook description is:
Symmetry is an organising principle that plays a role, often unrecognised, in a vast range of disciplines, from mathematics and the physical sciences to music, design and the arts. This course aims to introduce students from a variety of disciplines to symmetry and its consequences. While symmetry is associated with beauty, balance and harmony, it is also associated with conservation, stasis and boredom, and on its own symmetry is not enough to explain the richness, diversity and dynamism of the universe. In contrast, the concept of symmetry breaking is associated with transitions and evolution, and linked to self-organisation, emergent behaviour and the appearance of information.
Beyond what is learnt about symmetry and symmetry breaking in this course, it is hoped that the concepts will challenge and change the thinking of students as they approach future subjects in their own disciplines.
One of CARMA's goals is to foster an environment which provides guidance and support for
what we might call "technical research issues". Broadly, this has meant that CARMA has used
its resources to offer its members technical capabilities which were not readily available
elsewhere, such as collaborative file-sharing, accessible "rich videoconferencing", web site
hosting and web app development, high-performance computing, research software and
visualisation tools like 3-D rendering and 3-D printing. Over the past 10 years, some of
these resources have become available from other sources, including the University of
Newcastle, and for those facilities, CARMA provides guidance about how to access and use
them, as well as for other university systems.
This talk will cover the technical services which CARMA can help you with.
This is a talk for CARMA members, and a light lunch will be served at the start. Please RSVP
for catering purposes to Juliane Turner (Juliane.Turner@newcastle.edu.au).
RHD students are particularly encouraged to attend; please pass this on to your students
if they are not already engaged with CARMA.
In many engineering problems, physical phenomena occur at different length and time scales and are almost impossible to describe with a single mathematical model. More importantly, in such problems, small-scale physical phenomena can dramatically change the macroscopic properties of the system. Over the last few decades, particle-based methods have become a powerful tool that allows the physical phenomena of interest to be modelled at any length and time scale. In this talk, I'll introduce some widely used particle-based methods and share some of my experience in developing particle-based mathematical models for engineering problems.
We consider an $L^2$-gradient flow of closed planar curves whose corresponding evolution equation is of sixth order. Given a smooth initial curve we show that the solution to the flow exists for all time and, provided the length of the evolving curve remains bounded, smoothly converges to a multiply-covered circle. Moreover, we show that curves in any homotopy class with initially small $L^3\|k_s\|_2^2$ enjoy a uniform length bound under the flow, yielding the convergence result in these cases. We also give some partial results for figure-8 type solutions to the flow. This is joint work with Ben Andrews, Glen Wheeler and Valentina-Mira Wheeler.
The honeycomb toroidal graphs are a family of graphs I have been looking at now and then for thirty years. I shall discuss an ongoing project dealing with hamiltonicity as well as some of their properties which have recently interested the computer architecture community.
Finite generalised polygons are the rank 2 irreducible spherical buildings, and include projective planes and generalised quadrangles, hexagons, and octagons. Since the early work of Ostrom and Wagner on the automorphism groups of finite projective planes, there has been great interest in what the automorphism groups of generalised polygons can be, and in particular, whether it is possible to classify generalised polygons with a prescribed symmetry condition. For example, the finite Moufang polygons are the 'classical' examples by a theorem of Fong and Seitz (1973-1974) (and the infinite examples were classified in the work of Tits and Weiss (2002)). In this talk, we give an overview of some recent results on the study of symmetric finite generalised polygons, and in particular, on the work of the speaker with Cai Heng Li and Eric Swartz.
In this talk I'll describe some recent discoveries about edge-transitive graphs and edge-transitive maps. These are objects that have received relatively little attention compared with their vertex-transitive and arc-transitive siblings.
First I will explain a new approach (taken in joint work with Gabriel Verret) to finding all edge-transitive graphs of small order, using single and double actions of transitive permutation groups. This has resulted in the determination of all edge-transitive graphs of order up to 47 (the best possible just now, because the transitive groups of degree 48 are not known), and bipartite edge-transitive graphs of order up to 63. It also led us to the answer to a 1967 question by Folkman about the valency-to-order ratio for regular graphs that are edge- but not vertex-transitive.
Then I'll describe some recent work on edge-transitive maps, helped along by workshops at Oaxaca and Banff in 2017. I'll explain how such maps fall into 14 natural classes (two of which are the classes of regular and chiral maps), and how graphs in each class may be constructed and analysed. This will include the answers to some 18-year-old questions by Širáň,
Tucker and Watkins about the existence of particular kinds of such maps on orientable and non-orientable surfaces.
An important result of X.-J. Wang states that a convex ancient solution to mean curvature flow either sweeps out all of space or lies in a stationary slab (the region between two fixed parallel hyperplanes). We will describe recent results on the construction and classification of convex ancient solutions and convex translating solutions to mean curvature flow which lie in slab regions, highlighting the connection between the two. Work is joint with Theodora Bourni and Giuseppe Tinaglia.
We present three bivariate spline approaches to the scattered data problem. The splines are defined as the minimiser of a penalised least squares functional. The penalties are based on partial differentiation operators, and are integrated using the finite element method. We apply these methods to two problems: to remove the mixture of Gaussian and impulsive noise from an image, and to recover a continuous function from a set of noisy observations. Supervisor: Bishnu Lamichhane
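A one-dimensional discrete analogue of the penalised least squares idea above (my own sketch, not the bivariate finite element construction of the talk) fits in a few lines: the spline is the minimiser of a data-fidelity term plus a penalty on discrete curvature, so it solves a single linear system.

```python
import numpy as np

def penalised_smooth(y, lam):
    """Minimise ||f - y||^2 + lam * ||D2 f||^2 for a discrete signal y,
    where D2 is the second-difference operator (a discrete curvature
    penalty).  The minimiser solves (I + lam * D2^T D2) f = y."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# Recover a smooth trend from noisy observations (illustrative data)
rng = np.random.default_rng(0)
x = np.linspace(0.0, np.pi, 50)
noisy = np.sin(x) + 0.2 * rng.standard_normal(50)
f = penalised_smooth(noisy, lam=1.0)
```

The penalty parameter `lam` trades fidelity against smoothness, exactly as in the bivariate functional; the finite element method of the talk plays the role of the crude second-difference matrix used here.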
I will discuss my Honours work on Stabilisation of Finite Element Schemes for the Stokes Problem. In this work, we use a bi-orthogonal system in our stabilisation term. Supervisor: Bishnu Lamichhane
We investigate the regular action on a regular rooted tree induced by abelian groups satisfying property R_n. From this we construct all abelian groups satisfying property R_n when the number of children is prime. Supervisors: George Willis, Andrew Kepert
The dimer model is the finite discrete prototype for problems studied by different scientific communities. From the mathematical point of view, a simple question arises: how many dimer configurations are possible in a given lattice geometry? Typically, in the close-packed arrangement, where the whole lattice space is covered by dimers, different types of dimers organise in a non-homogeneous manner and, under certain conditions, this results in a separation of phases characterised by distinct patterns of configurations. The formulation of the dimer model as an integrable two-dimensional lattice model of statistical mechanics opens the path to an investigation of the conformal properties of dimers in the continuum scaling limit. The classification of dimers as a Gaussian free-field theory or a logarithmic field theory is still being debated, for reasons that will be addressed and explained. This is an example of the application of conformal invariance to a statistical model at criticality.
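As a concrete entry point to the counting question, a short dynamic program (my own illustration, not code from the talk) counts close-packed dimer configurations, i.e. perfect matchings, of an m-by-n grid:

```python
def count_dimers(rows, cols):
    """Count close-packed dimer configurations (perfect matchings) of a
    rows x cols grid by broken-profile dynamic programming.  The state
    is a bitmask recording which of the next `rows` cells, in
    column-major scan order, are already covered by a placed dimer."""
    if (rows * cols) % 2:
        return 0  # odd number of cells: no perfect matching
    dp = {0: 1}
    for c in range(cols):
        for r in range(rows):
            ndp = {}
            for mask, ways in dp.items():
                if mask & 1:
                    # current cell already covered: just advance the scan
                    ndp[mask >> 1] = ndp.get(mask >> 1, 0) + ways
                else:
                    if c + 1 < cols:
                        # horizontal dimer covering (r, c) and (r, c+1)
                        m = (mask >> 1) | (1 << (rows - 1))
                        ndp[m] = ndp.get(m, 0) + ways
                    if r + 1 < rows and not (mask & 2):
                        # vertical dimer covering (r, c) and (r+1, c)
                        m = (mask >> 1) | 1
                        ndp[m] = ndp.get(m, 0) + ways
            dp = ndp
    return dp.get(0, 0)
```

For the 8 x 8 board this gives 12988816, the classical count of dimer coverings of a chessboard.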
Knuth showed that a permutation can be sorted by passing it right-to-left through an infinite stack if and only if it \emph{avoids} a certain forbidden sub-pattern (231). Since then, many variations have been studied. I will describe some of these, including new work of my PhD student Andrew Goh on stacks in series and ``pop-stacks''.
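Knuth's characterisation is easy to verify by machine for small cases. The sketch below is my own illustration, using the common single-pass greedy convention (in which the forbidden pattern comes out as 231): sorting through one stack succeeds exactly when the permutation avoids 231.

```python
from itertools import permutations

def stack_sort(perm):
    """Pass perm through a single stack greedily: pop whenever the top
    is smaller than the next input; return the output sequence."""
    stack, out = [], []
    for x in perm:
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    while stack:
        out.append(stack.pop())
    return out

def avoids_231(perm):
    """True if no indices i < j < k have perm[k] < perm[i] < perm[j],
    i.e. perm contains no occurrence of the pattern 231."""
    n = len(perm)
    return not any(perm[k] < perm[i] < perm[j]
                   for i in range(n)
                   for j in range(i + 1, n)
                   for k in range(j + 1, n))

# Knuth's theorem, checked exhaustively for small n:
# sortable by one stack  <=>  231-avoiding
for n in range(1, 6):
    for p in map(list, permutations(range(1, n + 1))):
        assert (stack_sort(p) == sorted(p)) == avoids_231(p)
```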
Joris van der Hoeven and I recently discovered an algorithm that computes the product of two $n$-bit integers in $O(n \log n)$ bit operations. This is asymptotically faster than all previous known algorithms, and matches the complexity bound conjectured by Schönhage and Strassen in 1971. In this talk, I will discuss the history of integer multiplication, and give an overview of the new algorithm. No previous background on multiplication algorithms will be assumed.
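For context on that history: the first break below the schoolbook $O(n^2)$ bound was Karatsuba's 1962 idea of trading one half-size multiplication for a few additions. A minimal sketch follows (illustrative only; the $O(n \log n)$ algorithm of the talk is far more involved, and the base-case threshold here is an arbitrary choice):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using three recursive half-size
    products instead of four, giving O(n^{log2 3}) bit complexity."""
    if x < 2**32 or y < 2**32:
        return x * y                        # small enough: direct multiply
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)     # split x = xh * 2^n + xl
    yh, yl = y >> n, y & ((1 << n) - 1)     # split y = yh * 2^n + yl
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    m = karatsuba(xh + xl, yh + yl) - a - b  # equals xh*yl + xl*yh
    return (a << (2 * n)) + (m << n) + b
```

Schönhage–Strassen (1971) later reached $O(n \log n \log \log n)$ via FFT techniques, and the new algorithm removes the final $\log \log n$ factor.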
Recently, second-order methods have shown great success in a variety of machine learning applications. However, establishing convergence of the canonical member of this class, i.e., the celebrated Newton's method, has long been limited to making restrictive assumptions on (strong) convexity. Furthermore, smoothness assumptions, such as Lipschitz continuity of the gradient/Hessian, have always been an integral part of the analysis. In fact, it is widely believed that in the absence of a well-behaved and continuous Hessian, the application of curvature can hurt more than it can help. This has in turn limited the application range of the classical Newton's method in machine learning. To set the scene, we first briefly highlight some recent results, which shed light on the advantages of Newton-type methods for machine learning, as compared with first-order alternatives. We then turn our focus to a new member of this class, Newton-MR, which is derived using two seemingly simple modifications of the classical Newton's method. We show that, unlike the classical Newton's method, Newton-MR can be applied, beyond the traditional convex settings, to invex problems. Newton-MR appears almost indistinguishable from its classical counterpart, yet it offers a diverse range of algorithmic and theoretical advantages. Furthermore, by introducing a weaker notion of joint regularity of the Hessian and gradient, we show that Newton-MR converges globally even in the absence of the traditional smoothness assumptions. Finally, we obtain local convergence results in terms of the distance to the set of optimal solutions. This greatly relaxes the notion of "isolated minimum", which is required for the local convergence analysis of the classical Newton's method. Numerical simulations using several machine learning problems demonstrate the great potential of Newton-MR as compared with several other second-order methods.
In the past decade, the research area of arithmetic dynamics has grown in prominence. This area considers iterated maps as dynamical systems, acting on the integers, the rationals or on finite fields (meaning there is a finite phase space in the last case). Tools used to investigate arithmetic dynamics include combinatorics, arithmetic geometry, number theory, graph theory as well as numerical experimentation. There are important applications of arithmetic dynamical systems in cryptography. I will survey some of our investigations in arithmetic dynamics which have been motivated by the order and chaos divide in Hamiltonian dynamics.
The history of projection methods goes back to von Neumann and his method of alternating projections for finding a point in the intersection of two linear subspaces. These days the method of alternating projections and its various modifications, such as the Douglas-Rachford algorithm, are successfully used to solve challenging feasibility and optimisation problems. The convergence of projection methods (and its rate) depends on the structure of the sets that comprise the feasibility problem, and also on their position relative to each other. I will survey a selection of results, focusing on the impact of the geometry of the sets on the convergence.
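As a small worked instance (my own illustration, with an arbitrarily chosen ball and halfspace), von Neumann-style alternating projections drive an arbitrary starting point to the intersection of two closed convex sets:

```python
import numpy as np

def proj_ball(x, centre, radius):
    """Nearest point of the closed Euclidean ball to x."""
    d = x - centre
    dist = np.linalg.norm(d)
    return x if dist <= radius else centre + radius * d / dist

def proj_halfspace(x, a, b):
    """Nearest point of the halfspace {y : a.y <= b} to x."""
    violation = a @ x - b
    return x if violation <= 0 else x - violation * a / (a @ a)

# Two convex sets with nonempty intersection (illustrative choices)
centre, radius = np.array([2.0, 0.0]), 1.5
a, b = np.array([1.0, 0.0]), 1.0          # halfspace x1 <= 1
x = np.array([5.0, 5.0])                   # arbitrary starting point
for _ in range(200):
    x = proj_halfspace(proj_ball(x, centre, radius), a, b)
```

The iterates converge linearly here; as the talk notes, the rate depends on the geometry of the sets and their relative position (e.g. the angle at which they meet).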
In this talk, I will survey some of the famous quotient algorithms that can be used to compute efficiently with finitely presented groups. The last part of the talk will be about joint work with Alexander Hulpke (Colorado State University): we have looked at quotient algorithms for non-solvable groups, and I will report on the findings so far.
In computer science, an isomorphism testing problem asks whether two objects are in the same orbit under a group action. The most famous problem of this type has been the graph isomorphism problem. In late 2015, L. Babai announced a quasipolynomial-time algorithm for the graph isomorphism problem, which is widely regarded as a breakthrough in theoretical computer science. This leads to a natural question, that is, which isomorphism testing problems should naturally draw our attention for further exploration?
The Galois group of a polynomial is the automorphism group of its splitting field. These automorphisms act by permuting the roots of the polynomial, so that a Galois group will be a subgroup of a symmetric group. Using the Galois group, the splitting field of a polynomial can be computed more efficiently than otherwise, by exploiting the symmetries of the roots. I will present an algorithm developed by Fieker and Klueners, which I have extended, for computing Galois groups of polynomials over arithmetic fields, as well as approaches to computing splitting fields using the symmetries of the roots.
For linear and nonlinear dynamical systems, control problems such as feedback stabilization of target sets and feedback laws guaranteeing obstacle avoidance are topics of interest throughout the control literature. While the isolated problems (i.e., guaranteeing only stability or avoidance) are well understood, the combined control problem guaranteeing stability and avoidance simultaneously is leading to significant challenges even in the case of linear systems. In this talk we highlight difficulties in the controller design with conflicting objectives in terms of guaranteed avoidance of bounded sets and asymptotic stability of the origin. In addition, using the framework of hybrid systems, we propose a partial solution to the combined control problem for underactuated linear systems.
The calculus of variations is utilised to minimise the elastic energy arising from the curvature squared while maximising the van der Waals energy. Firstly, the shape of folded graphene sheets is investigated, and an arbitrary constant arising from integrating the Euler–Lagrange equation is determined. In this study, the structure is assumed to have a translational symmetry along the fold, so that the problem may be reduced to a two-dimensional problem with reflective symmetry across the fold. Secondly, both the variational calculus technique and a least squares minimisation procedure are employed to determine the joining structure involving a C60 fullerene and a carbon nanotube, namely a nanobud. We find that these two methods are in reasonable overall agreement. However, there is no experimental or simulation data to determine which procedure gives the more realistic results.
We discuss various optimisation-based approaches to machine learning. Tasks include regression, clustering, and classification. We discuss frequently used terms like 'unsupervised learning,' 'penalty methods,' and 'dual problem.' We motivate our discussion with simple examples and visualisations.
We investigate the construction of multidimensional prolate spheroidal wave functions using techniques from Clifford analysis. The prolates are eigenfunctions of a time-frequency limiting operator, but we show that they are also eigenfunctions of a differential operator. In an effort to compute solutions of this operator, we prove a Bonnet formula for a class of Clifford-Gegenbauer polynomials.
The models of collective decision-making considered in this presentation are nonlinear interconnected systems with saturating interactions, similar to Hopfield networks. These systems encode the possible outcomes of a decision process into different steady states of the dynamics. When the model is cooperative, i.e., when the underlying adjacency matrix is Metzler, the system is characterised by the presence of two main attractors, one positive and the other negative, representing two choices of agreement among the agents, associated to the Perron-Frobenius eigenvector of the system. Such equilibria are achieved when there is a sufficiently high 'social commitment' among the agents (here interpreted as a bifurcation parameter). When instead cooperation and antagonism coexist, the resulting signed graph is in general not structurally balanced, meaning that the Perron-Frobenius theorem does not apply directly. It is shown that the decision-making process is affected by the distance to structural balance, in the sense that the higher the frustration of the graph, the higher the commitment strength at which the system bifurcates. In both cases, it is possible to give conditions on the commitment strength beyond which other equilibria start to appear. These extra bifurcations are related to the algebraic connectivity of the graph.
Sea ice acts as a refrigerator for the world. Its bright surface reflects solar heat, and the salt it expels during the freezing process drives cold water towards the equator. As a result, sea ice plays a crucial role in our climate system. Antarctic sea-ice extent has shown a large degree of regional variability, in stark contrast with the steady decreasing trend found in the Arctic. This variability is within the ranges of natural fluctuations, and may be ascribed to the high incidence of weather extremes, like intense cyclones, that give rise to large waves, significant wind drag, and ice deformation. The role exerted by waves on sea ice is still particularly enigmatic and has attracted a lot of attention in recent years. Starting from theoretical knowledge, new understanding based on experimental models and computational fluid dynamics is presented. But the exploration of waves-in-ice cannot be exhausted without going into the field. And this is why I found myself in the middle of the Southern Ocean during a category five polar cyclone to measure waves…
Motivated by the construction of conformal field theories, Jones recently discovered a very general process that produces actions of the Thompson groups $F$, $T$ and $V$, such as unitary representations or actions on $C^{\ast}$-algebras. I will give a general panorama of this construction along with many examples, and present various applications regarding analytical properties of groups and, if time permits, lattice theory (e.g. quantum field theory).
Let $t$ be the multiplicative inverse of the golden mean. In 1995 Sean Cleary introduced the irrational-slope Thompson's group $F_t$, which is the group of piecewise-linear maps of the interval $[0,1]$ with breaks in $Z[t]$ and slopes powers of $t$. In this talk we describe this group using tree-pair diagrams, then demonstrate a finite presentation and a normal form, and prove that its commutator subgroup is simple. This group is the first example of a group of piecewise-linear maps of the interval whose abelianisation has torsion, and it is an open problem whether this group is a subgroup of Thompson's group $F$.
A Jonsson-Tarski algebra is a set $X$ endowed with an
isomorphism $X \to X \times X$. As observed by Freyd, the category of
Jonsson-Tarski algebras is a Grothendieck topos - a highly structured
mathematical object which is at once a generalised topological space,
and a generalised universe of sets.
In particular, one can do algebra, topology and functional analysis
inside the Jonsson-Tarski topos, and on doing so, the following objects
simply pop out: Cantor space; Thompson's group $V$; the Leavitt algebra
$L_2$; the Cuntz semigroup $S_2$; and the reduced $C^{\ast}$-algebra of
$S_2$. The first objective of this talk is to explain how this happens.
The second objective is to describe other "self-similar toposes"
associated to, for example, self-similar group actions, directed graphs
and higher-rank graphs; and again, each such topos contains within it a
familiar menagerie of algebraic-analytic objects. If time permits, I
will also explain a further intriguing example which gives rise to
Thompson's group F and, I suspect, the Farey AF algebra.
No expertise in topos theory is required; such background as is
necessary will be developed in the talk.
It is commonly expected that $e$, $\log 2$, $\sqrt{2}$, among other "classical" numbers, behave, in many respects, like almost all real numbers. For instance, their decimal expansion should contain every finite block of digits from $\{0, \ldots , 9\}$. We are very far away from establishing such a strong assertion. However, there has been some small recent progress in that direction. Let $\xi$ be an irrational real number. Its irrationality exponent, denoted by $\mu (\xi)$, is the supremum of the real numbers $\mu$ for which there are infinitely many integer pairs $(p, q)$ such that $|\xi - \frac{p}{q}| < q^{-\mu}$. It measures the quality of approximation to $\xi$ by rationals. We always have $\mu (\xi) \ge 2$, with equality for almost all real numbers and for irrational algebraic numbers (by Roth's theorem). We prove that, if the irrationality exponent of $\xi$ is equal to $2$ or slightly greater than $2$, then the decimal expansion of $\xi$ cannot be `too simple', in a suitable sense. Our result applies, among other classical numbers, to badly approximable numbers, non-zero rational powers of ${{\rm e}}$, and $\log (1 + \frac{1}{a})$, provided that the integer $a$ is sufficiently large. It establishes an unexpected connection between the irrationality exponent of a real number and its decimal expansion.
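A quick numerical illustration of the exponent (my own sketch, with illustrative values): the continued fraction convergents of $\sqrt{2} = [1; 2, 2, 2, \ldots]$ approximate it with error roughly $q^{-2}$, consistent with $\mu(\sqrt{2}) = 2$ for this badly approximable number.

```python
import math

# Convergents p/q of sqrt(2), whose continued fraction is [1; 2, 2, ...]:
# p_{n+1} = 2*p_n + p_{n-1},  q_{n+1} = 2*q_n + q_{n-1}
p0, q0, p1, q1 = 1, 1, 3, 2
for _ in range(8):
    p0, p1 = p1, 2 * p1 + p0
    q0, q1 = q1, 2 * q1 + q0

# Pell's equation certifies the quality of these fractions: p^2 - 2q^2 = +-1
assert abs(p1 * p1 - 2 * q1 * q1) == 1

# Effective exponent mu_est defined by |sqrt(2) - p/q| = q^(-mu_est);
# it hovers just above 2 and tends to 2 as q grows
err = abs(math.sqrt(2) - p1 / q1)
mu_est = -math.log(err) / math.log(q1)
```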
I will introduce and demonstrate the Coq proof assistant.
The old joke is that a topologist can’t distinguish between a coffee cup and a doughnut. A recent variant of homology, called persistent homology, can be used in data analysis to understand the shape of data. I will give an introduction to persistent homology and describe two example applications of this tool.
Imagine a world where physical and chemical laboratories are unnecessary, because all experiments can be simulated accurately on a computer. In principle this is possible by solving the quantum mechanical Schrödinger equation. Unfortunately, this is far from trivial and practically impossible for large and complex materials and reactions. In 1998, Walter Kohn and John A. Pople won the Nobel Prize in Chemistry for developing density-functional theory (DFT). DFT allows solutions of the Schrödinger equation to be found much more efficiently than with ab-initio and similar approaches, thus enabling the computation of materials properties in an unprecedented way. In this seminar, I will introduce quantum mechanical principles and the basic idea of DFT. Then, I will present an example of the computational elucidation of a reaction mechanism in materials science.
This project aims to investigate algebraic objects known as 0-dimensional groups, which are a mathematical tool for analysing the symmetry of infinite networks. Group theory has been used to classify possible types of symmetry in various contexts for nearly two centuries now, and 0-dimensional groups are the current frontier of knowledge. The expected outcome of the project is that the understanding of the abstract groups will be substantially advanced, and that this understanding will shed light on structures possessing 0-dimensional symmetry. In addition to being cultural achievements in their own right, advances in group theory such as this also often have significant translational benefits. These will include the creation of tools relevant to information science, and researchers trained in the use of those tools.
The project aims to develop novel techniques to investigate Geometric analysis on infinite dimensional bundles, as well as Geometric analysis of pathological spaces with Cantor set as fibre, that arise in models for the fractional quantum Hall effect and topological matter, areas recognised with the 1998 and 2016 Nobel Prizes. Building on the applicant's expertise in the area, the project will involve postgraduate and postdoctoral training in order to enhance Australia's position at the forefront of international research in Geometric Analysis. Ultimately, the project will enhance Australia's leading position in the area of Index Theory by developing novel techniques to solve challenging conjectures, and mentoring HDR students and ECRs.
This project aims to solve hard, outstanding problems which have impeded our ability to progress in the area of quantum or noncommutative calculus. Calculus has provided an invaluable tool to science, enabling scientific and technological revolutions throughout the past two centuries. The project will initiate a program of collaboration among top mathematical researchers from around the world and bring together two separate mathematical areas into a powerful new set of tools. The outcomes from the project will impact research at the forefront of mathematical physics and other sciences and enhance Australia's reputation and standing.
Mahler's method in number theory is an area wherein one answers questions surrounding the transcendence and algebraic independence of both power series $F(z)$, which satisfy the functional equation $$a_0(z)F(z)+a_1(z)F(z^k)+\cdots+a_d(z)F(z^{k^d})=0$$ for some integers $k\geqslant 2$ and $d\geqslant 1$ and polynomials $a_0(z),\ldots,a_d(z)$, and their special values $F(\alpha)$, typically at algebraic numbers $\alpha$. The most important examples of Mahler functions arise from sequences of significance in theoretical computer science and dynamical systems, and many are related to digital properties of sets of numbers. For example, the generating function $T(z)$ of the Thue-Morse sequence, which is known to be the fixed point of a uniform morphism in computer science or equivalently a constant-length substitution system in dynamics, is a Mahler function. In 1930, Mahler proved that the numbers $T(\alpha)$ are transcendental for all non-zero algebraic numbers $\alpha$ in the complex open unit disc. With digital computers and computation so prevalent in our society, such results seem almost second nature these days and thinking about them is very natural. But what is one really trying to communicate by proving results about functions or numbers such as those considered in Mahler's method?
In this talk, highlighting work from the very beginning of Mahler's career, we speculate---and provide some variations---on what Mahler was really trying to understand. This talk will combine modern and historical methods and will be accessible to students.
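To make the functional equation concrete: the $\pm 1$ version of the Thue-Morse generating function, $F(z) = \sum_{n \ge 0} (-1)^{t_n} z^n = \prod_{k \ge 0}(1 - z^{2^k})$, satisfies the Mahler equation $F(z) - (1-z)F(z^2) = 0$, i.e. the case $k=2$, $d=1$ with $a_0(z) = 1$ and $a_1(z) = -(1-z)$. A quick numerical check (my own sketch, with an arbitrary evaluation point and truncation):

```python
def tm(n):
    """Thue-Morse bit: parity of the number of 1s in the binary form of n."""
    return bin(n).count("1") & 1

def F(z, terms=200):
    """Truncated power series sum_{n < terms} (-1)^{t_n} z^n."""
    return sum((-1) ** tm(n) * z ** n for n in range(terms))

# The Mahler functional equation F(z) = (1 - z) F(z^2), checked at z = 0.3;
# the residual is tiny because the truncation error is of order 0.3^200
z = 0.3
residual = abs(F(z) - (1 - z) * F(z * z))
```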
In this talk, we will present a brief overview of the mathematical diffraction of structures that lack translational symmetry but may still exhibit long-range order. We introduce aperiodic tilings as toy models for such structures and discuss the relevant measure-theoretic formulation of the diffraction analysis. In particular, we focus on the component of the diffraction that suggests stochasticity but can be non-trivial for deterministic systems, and on how its absence can be confirmed using techniques involving Lyapunov exponents and Mahler measures. This is joint work with Michael Baake, Michael Coons, Franz Gaehler and Uwe Grimm.
The problem of packing space with regular tetrahedra has a 2000 year history. This talk surveys the history of work on the problem. It includes work by mathematicians, computer scientists, physicists, chemists, and materials scientists. Much progress has been made on it in recent years, yet there remain many unsolved problems.
In this talk, I will show how to build $C^*$-algebras using a family of local homeomorphisms. We will then compute the KMS states of the resulting algebras using the Laca-Neshveyev machinery. I will then apply this result to the $C^*$-algebras of $k$-graphs and obtain interesting $C^*$-algebraic information about $k$-graph algebras. This talk is based on joint work with Astrid an Huef and Iain Raeburn.
The KMS condition for equilibrium states of C*-dynamical systems has been around since the 1960s. With the introduction of systems arising from number theory and from semigroup dynamics following pioneering work of Bost and Connes, their study has accelerated significantly in the last 25 years. I will give a brief introduction to C*-dynamical systems and their KMS states and discuss two constructions that exhibit fascinating connections with key open questions in mathematics such as Hilbert's 12th problem on explicit class field theory and Furstenberg's $\times 2$, $\times 3$ conjecture.
Using a variant of the Laca-Raeburn program for calculating KMS states, Laca, Raeburn, Ramagge and Whittaker showed that, at any inverse temperature above a critical value, the KMS states arising from self-similar actions of groups (or groupoids) $G$ are parameterised by traces on C*(G). The parameterisation takes the form of a self-mapping $\chi$ of the trace space of C*(G) that is built from the structure of the stabilisers of the self-similar action. I will outline how this works, and then sketch how to see that $\chi$ has a unique fixed point, which picks out the ``preferred'' trace of C*(G) corresponding to the only KMS state that persists at the critical inverse temperature. The first part of this will be an exposition of results of Laca-Raeburn-Ramagge-Whittaker. The second part is joint work with Joan Claramunt.
Zombies are a popular figure in pop culture/entertainment and they are usually portrayed as being brought about through an outbreak or epidemic. Consequently, we model a zombie attack, using biological assumptions based on popular zombie movies. We introduce a basic model for zombie infection, determine equilibria and their stability, and illustrate the outcome with numerical solutions. We then refine the model to introduce a latent period of zombification, whereby humans are infected, but not infectious, before becoming undead. We then modify the model to include the effects of possible quarantine or a cure. Finally, we examine the impact of regular, impulsive reductions in the number of zombies and derive conditions under which eradication can occur. We show that only quick, aggressive attacks can stave off the doomsday scenario: the collapse of society as zombies overtake us all.
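The basic model of the talk is an SZR compartment system in the style of Munz et al.; a minimal forward-Euler sketch follows (my own illustration: parameter values are arbitrary, and births are ignored for simplicity).

```python
def szr_step(S, Z, R, beta, zeta, alpha, delta, dt):
    """One forward-Euler step of the basic SZR model (births ignored):
       S' = -beta*S*Z - delta*S            (infection, natural death)
       Z' =  beta*S*Z + zeta*R - alpha*S*Z (infection, resurrection, defeat)
       R' =  delta*S + alpha*S*Z - zeta*R  (deaths minus resurrections)"""
    dS = -beta * S * Z - delta * S
    dZ = beta * S * Z + zeta * R - alpha * S * Z
    dR = delta * S + alpha * S * Z - zeta * R
    return S + dS * dt, Z + dZ * dt, R + dR * dt

def simulate(S0=500.0, Z0=1.0, R0=0.0, beta=0.0095, zeta=0.0001,
             alpha=0.005, delta=0.0001, dt=0.01, steps=1000):
    S, Z, R = S0, Z0, R0
    for _ in range(steps):
        S, Z, R = szr_step(S, Z, R, beta, zeta, alpha, delta, dt)
        # populations cannot go negative
        S, Z, R = max(S, 0.0), max(Z, 0.0), max(R, 0.0)
    return S, Z, R

# Because zombies win encounters more often than they lose them
# (beta > alpha), the susceptible population collapses
S, Z, R = simulate()
```

With these illustrative parameters the zombies take over, matching the doomsday conclusion; the latent-infection, quarantine, cure and impulsive-eradication refinements of the talk each modify one or more of the three rate equations.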
During my study leave in 2018 I have applied nonlinear stability analysis techniques to the Douglas-Rachford Algorithm, with the aim of shedding light on the interesting non-convex case, where convergence is often observed but seldom proven. The Douglas-Rachford Algorithm can solve optimisation and feasibility problems, provably converges weakly to solutions in the convex case, and constitutes a practical heuristic in non-convex cases. Lyapunov functions are stability certificates for difference inclusions in nonlinear stability analysis. Some other recent nonlinear stability results are showcased as well.