Let $M(n)$ be the number of distinct entries in the multiplication table for integers smaller than $n$. More precisely, $M(n) := |\{\,ij \mid 0 \le i, j < n\,\}|$. The order of magnitude of $M(n)$ was established in a series of papers by various authors, starting with Erdős (1950) and ending with Ford (2008), but an asymptotic formula for $M(n)$ is still unknown. After describing some of the history of $M(n)$, I will consider two algorithms for computing $M(n)$ exactly for moderate values of $n$, and several Monte Carlo algorithms for estimating $M(n)$ accurately for large $n$. This leads to a consideration of algorithms, due to Bach (1985-88) and Kalai (2003), for generating random factored integers: integers $r$ that are uniformly distributed in a given interval, together with the complete prime factorisation of $r$. The talk will describe ongoing work with Carl Pomerance (Dartmouth, New Hampshire) and Jonathan Webster (Butler, Indiana).
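For small $n$, the quantity $M(n)$ can be computed directly from the definition. The following sketch (a naive $O(n^2)$ enumeration, not one of the algorithms discussed in the talk) illustrates the object of study:

```python
def multiplication_table_size(n):
    """M(n) = number of distinct products i*j with 0 <= i, j < n,
    computed by naive enumeration of the whole table."""
    return len({i * j for i in range(n) for j in range(n)})

# the fraction of the n^2 table entries that are distinct shrinks as n grows
print([multiplication_table_size(n) for n in range(2, 11)])
```

The talk's algorithms avoid this quadratic cost, which is prohibitive already for moderate $n$.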
Bio: Richard Brent is a graduate of Monash and Stanford Universities. His research interests include analysis of algorithms, computational complexity, parallel algorithms, structured linear systems, and computational number theory. He has worked at IBM Research (Yorktown Heights), Stanford, Harvard, Oxford, ANU and the University of Newcastle (NSW). In 1978 he was appointed Foundation Professor of Computer Science at ANU, and in 1983 he joined the Centre for Mathematical Analysis (also at ANU). In 1998 he moved to Oxford, returning to ANU in 2005 as an ARC Federation Fellow. He was awarded the Australian Mathematical Society Medal (1984), the Hannan Medal of the Australian Academy of Science (2005), and the Moyal Medal (2014). Brent is a Fellow of the Australian Academy of Science, the Australian Mathematical Society, the IEEE, ACM, IMA, SIAM, etc. He has supervised twenty PhD students and is the author of two books and about 270 papers. In 2011 he retired from ANU and moved to Newcastle to join CARMA, at the invitation of the late Jon Borwein.
The use of various methods to obtain close to optimal quantization leads to interesting questions about the behavior of random processes, Diophantine approximation, ergodic maps, shrinking targets, and other related constructions. The goal in all of these approaches to quantization is the speed of decrease of the error, coupled with the simplicity and concreteness of the process employed.
I will discuss the various completed, ongoing, and planned mathematics visualisation projects within CARMA's SeeLab visualisation laboratory.
Bio: Michael Assis was awarded a PhD in Statistical Mechanics at Stony Brook University in 2014, and then took a postdoctoral fellowship at the University of Melbourne. In 2017 he held a computational mathematics postdoctoral position within CARMA, and earlier this year he worked with David Allingham to develop CARMA's SeeLab mathematics visualisation laboratory.
There is an intriguing analogy between number fields and function fields. If we view classical Number Theory as the study of the ring of integers and its extensions, then function field arithmetic is the study of the ring of polynomials over a finite field and its extensions. According to this analogy, most constructions and phenomena in classical Number Theory, ranging from the elementary theorems of Euler, Fermat and Wilson to the Riemann Hypothesis, elliptic curves, class field theory and modular forms, all have their function field analogues. I will give a panoramic tour of some of these constructions and highlight their similarities and differences to their classical counterparts.
This lecture should be accessible to advanced undergraduate students.
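As a small taste of the analogy, here is a sketch (my illustration, not taken from the talk) of the function field analogue of Fermat's little theorem: for an irreducible $P \in \mathbb{F}_2[x]$ of degree $d$, every residue $a \not\equiv 0 \pmod{P}$ satisfies $a^{2^d - 1} \equiv 1 \pmod{P}$. Binary polynomials are encoded as bitmasks of their coefficients.

```python
def gf2_mulmod(a, b, p):
    """Product of two binary polynomials (bitmasks over F_2) mod p.
    Assumes deg a, deg b < deg p."""
    d = p.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> d) & 1:   # reduce as soon as degree reaches deg p
            a ^= p
    return r

def gf2_powmod(a, e, p):
    """Square-and-multiply exponentiation in F_2[x]/(p)."""
    r = 1
    while e:
        if e & 1:
            r = gf2_mulmod(r, a, p)
        a = gf2_mulmod(a, a, p)
        e >>= 1
    return r

# P = x^3 + x + 1 (bitmask 0b1011) is irreducible of degree 3 over F_2,
# so the residue field has 2^3 = 8 elements and a^(2^3 - 1) = 1 mod P
# for every nonzero residue a: the function field "Fermat".
P = 0b1011
print(gf2_powmod(0b10, 7, P))   # a = x, output 1 (the constant polynomial)
```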
The Discrete Element Method (DEM) is a very powerful numerical method for the simulation of unbonded and bonded granular materials, such as soil and rock. One of the unique features of this approach is that it explicitly considers the individual grains or particles and all their interactions. The DEM is an extension of the Molecular Dynamics (MD) approach. The motion of the particles is governed by Newton's second law, and the rigid body dynamic equations are generally solved by applying an explicit time-stepping algorithm. Spherical particles are usually used, as this results in the most efficient contact detection. Nevertheless, with the increase in computing power, non-spherical particles are becoming more popular. In addition, great effort is being made to couple the method with other continuum methods to model multiphase materials. The talk discusses recent developments of the DEM in Geomechanics based on the open-source framework YADE and some of its ongoing challenges.
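The core DEM loop described above (contact detection, a contact force law, and an explicit update of Newton's second law) can be sketched in a few lines. The toy 1D example below, with a linear-spring contact, is purely illustrative and unrelated to YADE; all parameter values are assumptions.

```python
# Toy 1D DEM sketch: two spherical particles approach, collide through
# a linear-spring contact law, and separate.  Motion follows Newton's
# second law, integrated with an explicit (symplectic Euler) scheme.
k = 1.0e4    # contact normal stiffness (N/m), assumed
m = 1.0      # particle mass (kg), assumed
r = 0.05     # particle radius (m), assumed
dt = 1.0e-4  # time step, small relative to the contact duration

x = [0.0, 0.12]   # centre positions (m)
v = [1.0, -1.0]   # velocities (m/s): particles approach each other

for _ in range(2000):
    # contact detection: the spheres overlap when centre distance < 2r
    overlap = 2.0 * r - (x[1] - x[0])
    f = k * overlap if overlap > 0.0 else 0.0  # repulsive normal force
    v[0] += (-f / m) * dt   # Newton's second law, explicit update
    v[1] += (f / m) * dt
    x[0] += v[0] * dt
    x[1] += v[1] * dt
# equal masses and an elastic contact: the velocities swap
```

Real DEM codes add damping, friction, rotation, and 3D contact geometry on top of exactly this skeleton.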
Bio: Klaus has more than 10 years' experience in the development of cutting-edge numerical tools for geotechnical engineering and rock mechanics applications. He obtained his PhD in civil engineering from Graz University of Technology (Austria). After moving to Australia, he expanded his initial research experience on continuum-based numerical modelling with the Boundary Element Method (BEM) and Finite Element Method (FEM) by taking on the Discrete Element Method (DEM), a discontinuum-based method. He is an active developer of the open-source DEM framework YADE (https://yade-dem.org), an efficient numerical tool for the dynamic simulation of geomaterials. Lately he has been concentrating on the development of a highly innovative framework for the modelling of deformable discrete elements.
Experimental discovery has long played an important role in research mathematics, even before the advent of modern computational tools. Many methods of antiquity are familiar to all of us, including the drawing of pictures to gain geometric insights and exhaustively solving similar problems in order to identify patterns. I will share a variety of modern computational tools and techniques which I have used for my research at CARMA. The contexts of the discoveries will be varied -- including number theory, non-Euclidean geometry, complex analysis, and optimization -- and so the emphasis will be on the strategies employed rather than specific outcomes.
Bio: Scott Lindstrom received his master's degree from Portland State University. In September 2015, he came to CARMA at the University of Newcastle as a PhD student of Jonathan Borwein. Following Professor Borwein's untimely passing, he has continued as a student of Brailey Sims, Heinz Bauschke, and Bishnu Lamichhane. In October he will begin a postdoctoral fellowship at Hong Kong Polytechnic University. His principal research area is experimental mathematics with particular emphasis in optimization and nonlinear convex analysis. He is a member of the AustMS special interest group Mathematics of Computation and Optimization (MoCaO) and organizes the Borwein Meetings for RHD students and postdocs at CARMA.
An enduring topic of research interest relates to the heritability of mental traits, such as intelligence. Some of the work on this topic has focussed on genetic contributions to the speed of cognitive processing, by examination of response times in psychometric tests. An important limitation of previous work is the underlying assumption that variability in response times solely reflects variability in the speed of cognitive processing. This assumption has been problematic in other domains, due to the confounding effects of caution and motor execution speed on observed response times. We extend a cognitive model of decision-making to account for the relatedness structure in a twin study paradigm. This approach has the potential to separately quantify different contributions to the heritability of response time: contributions from cognitive processing speed, caution, and motor execution speed. In some ways, this is a typical usage of an evidence accumulation model, and it throws up all the typical problems that we struggle with in data visualisation. Those problems will become evident during the talk, as we discuss data from the Human Connectome Project. We find that caution is both highly heritable and highly influenced by the environment, while cognitive processing speed is moderately heritable with little environmental influence, and motor execution speed appears to have no strong influence from either. Our study suggests that the assumption made in previous studies of the heritability being within mental processing speed is incorrect, with response caution actually being the most heritable part of the decision process.
Complex virtual environments are used for entertainment in the form of games and are also fundamental in training and simulation environments. Apart from the visual representation of reality, these environments, and the interactions occurring between users within them, are a source of a wide variety of data. These data range from spatio-temporal positional tracking within 3D virtual environments to measurements of users' physiological responses to in-game events. Of particular interest are measures of visual complexity, and how these measures might be useful in determining minimum realism for affective virtual environments. This talk will consider these different data types and sources and highlight some active research areas in the analysis and visualisation of this data.
About the speaker: Dr Karen Blackmore is a Senior Lecturer in Computing at the School of Electrical Engineering and Computing, The University of Newcastle, Australia. She received her BIT (Spatial Science) With Distinction and PhD (2008) from Charles Sturt University, Australia. Dr Blackmore is a spatial scientist with research expertise in the modelling and simulation of complex social and environmental systems. Her research interests cover the use of agent-based models for simulation of socio-spatial interactions, and the use of simulation and games for serious purposes. Her research is cross-disciplinary and empirical in nature, and extends to exploration of the ways that humans engage and interact with models and simulations. Before joining the University of Newcastle, Dr Blackmore was a Research Fellow in the Department of Environment and Geography at Macquarie University, Australia and a Lecturer in the School of Information Technology, Computing and Mathematics at Charles Sturt University.
In teaching mathematics, we are interested in improving students' understanding of core concepts. Students enter our classrooms as relative novices in their understanding of mathematics, and one of our goals is to help them build expert understanding of mathematics. This presents us with two related problems: (1) creating effective teaching strategies designed to evolve novice thinking into expert thinking, and (2) designing and validating measures capable of assessing whether different teaching interventions improve students' conceptual understanding of mathematics. Many usual approaches to these problems make use of scoring rubrics for student work. I will discuss an experiment that highlights some of the difficulties of using scoring rubrics for this work, and then I will present an alternative approach to these problems that makes use of the law of comparative judgment, which is based on the principle that humans are better at comparing two things against one another than at judging one thing against a set of criteria (Thurstone's Law of Comparative Judgment, 1927). As part of this presentation, I will demonstrate ComPAIR, a new online tool for supporting student learning with peer feedback. ComPAIR was co-developed with a group of colleagues from the Faculty of Science, the Faculty of Arts, and the Centre for Teaching and Learning Technology at the University of British Columbia.
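The scaling step behind Thurstone's law can be made concrete. The fragment below (an illustrative Case V computation, my sketch rather than the ComPAIR implementation) turns a matrix of pairwise win proportions into scale values by averaging inverse-normal-transformed proportions.

```python
from statistics import NormalDist

def thurstone_case_v(win_prop):
    """Thurstone Case V scale values from pairwise win proportions:
    win_prop[i][j] = proportion of judges who rated item i above
    item j.  Item i's scale value is the mean of the inverse-normal
    transforms of row i (diagonal excluded)."""
    inv = NormalDist().inv_cdf
    n = len(win_prop)
    scores = []
    for i in range(n):
        zs = [inv(win_prop[i][j]) for j in range(n) if j != i]
        scores.append(sum(zs) / len(zs))
    return scores

# three pieces of student work compared pairwise by many judges
# (hypothetical proportions, for illustration only)
p = [[0.5, 0.8, 0.9],
     [0.2, 0.5, 0.7],
     [0.1, 0.3, 0.5]]
print(thurstone_case_v(p))  # item 0 receives the highest scale value
```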
Constructive methods for the controller design for dynamical systems subject to bounded state constraints have only been investigated by a limited number of researchers. The construction of robust control laws is significantly more difficult compared to unconstrained problems due to the necessity of discontinuous feedback laws. A rigorous understanding of the problem is however important in obstacle or collision avoidance for mobile robots, for example. In this talk we present preliminary results on the controller design for obstacle avoidance of linear systems based on the notion of hybrid systems. In particular, we derive a discontinuous feedback law globally stabilizing the origin while avoiding a neighborhood of an obstacle. In this context, an explicit bound on the maximal size of the obstacle is additionally provided.
The lattice Boltzmann method is used to carry out direct numerical simulations of laminar and turbulent flows in smooth- and rough-walled channels or pipes at critical and subcritical Reynolds numbers. The velocity field is solved using the Lattice Boltzmann Method (LBM) as an alternative numerical approach to computational fluid dynamics. The method has also been successfully used to simulate more complex fluid dynamics such as thermal transport, jet flows and electrokinetic flows. The basic idea of the LBM is to construct a simplified kinetic model that incorporates the essential physics of microscopic processes, so that the macroscopic averaged properties obey the desired Navier-Stokes equations. The computation and visualization will be discussed in this seminar.
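The collide-then-stream structure of the LBM can be illustrated with a toy one-dimensional diffusion model (my sketch; far simpler than the multi-speed lattices used for the Navier-Stokes equations, and all parameters are illustrative assumptions):

```python
import numpy as np

# Toy 1D lattice Boltzmann sketch with three velocities {-1, 0, +1}
# and a zero-velocity equilibrium: the macroscopic density spreads
# diffusively from an initial spike.
N, tau, steps = 100, 0.8, 200
f = np.zeros((3, N))              # f[0]: rest, f[1]: right, f[2]: left
f[:, N // 2] = [0.5, 0.25, 0.25]  # unit mass concentrated at the centre

for _ in range(steps):
    rho = f.sum(axis=0)
    # BGK collision: relax each population towards local equilibrium
    f[0] += (rho / 2.0 - f[0]) / tau
    f[1] += (rho / 4.0 - f[1]) / tau
    f[2] += (rho / 4.0 - f[2]) / tau
    # streaming step (periodic boundaries)
    f[1] = np.roll(f[1], 1)
    f[2] = np.roll(f[2], -1)

rho = f.sum(axis=0)   # total mass is conserved while the spike spreads
```

The hydrodynamic LBM replaces the zero-velocity equilibrium with a velocity-dependent one on a richer lattice, but the loop structure is the same.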
About the speaker: Dr Nisat Nowroz Anika completed her Bachelor's (2011) and MSc (2013) in Applied Mathematics at Khulna University in Bangladesh. She is currently undertaking a Ph.D. in Mechanical Engineering at the University of Newcastle under the supervision of Professor Lyazid Djenidi. The major focus of her research is mixing at low Reynolds number by generating turbulence.
The late Professor Jonathan Borwein was fascinated by the constant
$\pi$. Some of his talks on this topic can be found on the CARMA website.
This homage to Jon is based on my talk at the Jonathan Borwein Commemorative
Conference. I will describe some algorithms for the high-precision
computation of $\pi$ and the elementary functions, with particular reference
to the book Pi and the AGM by Jon and his brother Peter Borwein.
Here "AGM" is the arithmetic-geometric mean
of Gauss and Legendre. Because the AGM has second-order convergence, it
can be combined with FFT-based fast multiplication algorithms to give fast
algorithms for the $n$-bit computation of $\pi$.
I will survey a few of the results and algorithms that were of interest to
Jon. In several cases they were either discovered or improved by him. If
time permits, I will also mention some new results that would have been of
interest to Jon.
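The second-order convergence mentioned above is easy to see in practice. Here is a sketch of the Gauss-Legendre (AGM-based) iteration in ordinary double precision, where each step roughly doubles the number of correct digits; high-precision versions combine the same iteration with FFT-based multiplication.

```python
import math

# Gauss-Legendre (AGM) iteration for pi.
a, b = 1.0, 1.0 / math.sqrt(2.0)
t, p = 0.25, 1.0
for _ in range(4):
    a, b, t, p = ((a + b) / 2,
                  math.sqrt(a * b),
                  t - p * ((a - b) / 2) ** 2,
                  2 * p)

pi_approx = (a + b) ** 2 / (4 * t)
print(pi_approx)   # agrees with pi to machine accuracy
```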
The finite element method has become the most powerful approach in solving partial differential equations arising in modern engineering and physical applications. We present the computation and visualisation of solutions of some applied partial differential equations, using the finite element method for most of our examples. Our examples come from solid and fluid mechanics, image processing and heat conduction in sliding meshes.
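As a minimal illustration of the method (my sketch, far simpler than the talk's examples), the following solves the 1D Poisson problem $-u'' = f$ on $(0,1)$ with zero boundary values using piecewise-linear elements on a uniform mesh:

```python
import numpy as np

def fem_poisson_1d(f, n):
    """Linear finite elements for -u'' = f on (0,1), u(0) = u(1) = 0,
    on a uniform mesh with n elements."""
    h = 1.0 / n
    nodes = np.linspace(0.0, 1.0, n + 1)
    # element stiffness assembles to tridiag(-1, 2, -1) / h
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h
    # load vector via nodal quadrature: integral of f * phi_i ~ f(x_i) * h
    b = np.array([f(s) * h for s in nodes[1:-1]])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, b)
    return nodes, u

# f = pi^2 sin(pi x) has exact solution u = sin(pi x)
x, u = fem_poisson_1d(lambda s: np.pi**2 * np.sin(np.pi * s), 64)
```

The computed nodal values agree with the exact solution to second order in the mesh size.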
About the speaker: Dr Lamichhane was awarded the MSc in Industrial Mathematics from the University of Kaiserslautern in 2001, and the PhD in Mathematics from the University of Stuttgart in 2006. He took up a postdoctoral fellowship at the Australian National University in 2008 and is now a senior lecturer at the University of Newcastle. Dr Lamichhane's main interests are numerical analysis, differential equations and applied mathematics, and his recent research focus is on the approximation of solutions of partial differential equations using the finite element method.
Multi-objective optimization provides decision-makers with a complete view of the trade-offs between their objective functions that are attainable by feasible solutions. Since many problems can be formulated as integer programs, the development of efficient and reliable multi-objective integer programming solvers may have significant benefits for problem solving in industry and government. However, the conjunction of multiple objectives and integrality yields problems that can be challenging to solve. So, this talk provides an overview of a few new exact as well as heuristic algorithms for this class of optimization problems. In particular, the talk focuses on computing the nondominated frontier and also the problem of optimization over the frontier. It is worth mentioning that all of the algorithms and their corresponding open-source software packages are developed in the Multi-Objective Optimization Laboratory at the University of South Florida.
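On a toy scale, the nondominated frontier of a bi-objective integer program can be computed by brute-force enumeration. This sketch (my illustration, not one of the solvers from the talk; the model is made up) shows the object being computed:

```python
from itertools import product

# A tiny bi-objective integer program, enumerated exhaustively:
#   minimise (f1, f2) = (x + 2y, 3x - y)
#   subject to 0 <= x, y <= 3 and x + y >= 2, with x, y integer.
feasible = [(x, y) for x, y in product(range(4), repeat=2) if x + y >= 2]
points = {(x + 2 * y, 3 * x - y) for x, y in feasible}

def nondominated(pts):
    """Keep the points not dominated by any other point (minimisation)."""
    return sorted(p for p in pts
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in pts))

frontier = nondominated(points)
print(frontier)
```

The algorithms in the talk compute such frontiers without enumerating the feasible set, which is of course impossible at realistic problem sizes.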
The rapid increase in available information has led to many attempts to automatically locate patterns in large, abstract, multi-attributed information spaces. These techniques are often called data mining and have met with varying degrees of success. An alternative approach to automatic pattern detection is to keep the user in the exploration loop by developing displays that enhance their natural sensory abilities to detect patterns. This approach, whether visual, auditory, or touch based, can assist a domain expert to search their data for useful relationships. However, designing models of the abstract data and defining appropriate sensory mappings are critical tasks in building such a system. Intuitive multi-sensory displays (visual, auditory, touch) of abstract data are difficult to design and the process needs to carefully consider human perceptual and cognitive abilities. This talk will introduce a taxonomy that helps designers consider the range of sensory mappings, along with appropriate guidelines, when building such multisensory displays. To illustrate this process a case study in the domain of stock market data is also presented.
About the speaker: Keith completed his Bachelor's degree in Mathematics at Newcastle University in 1988 and his Masters in Computing in 1993. Between 1989 and 1999, Keith worked on applied computer research for BHP Research. His PhD examined the design of multi-sensory displays for stock market data and was completed at Sydney University in 2003. His work has received international recognition, being selected among the best visualisations and consequently exhibited at a number of international locations and reviewed in the prestigious journal Science. In 2007 he completed a post-doctoral year in Boston working at the New England Complex Systems Institute visualising health related data. He has expertise in the fields of Human Interface Design, Computer Games, Virtual Reality, Immersive Analytics, and the theory of Perception and Cognition related to the design of multi-sensory user interfaces. Keith currently works in the School of Electrical Engineering and Computing at the University of Newcastle, Australia, where he teaches Computer Games and Programming. While his background is in Computer Science, he has also exhibited his paintings in 11 exhibitions and provided lyrics for 5 CDs and a musical. You can find more about his art and science at www.knesbitt.com.
Expanding on the 1993 paper by Hohn and Skoruppa, with a brief exploration of optimal conditions for the Mahler measure.
About the speaker: Elijah Moore is a summer research student under the supervision of Wadim Zudilin.
The human brain is still one of the most powerful and at the same time most energy efficient computers. Artificial neural networks (ANN) are inspired by their biological counterparts and the workings of biological nervous systems. ANNs were among the most popular machine learning algorithms in the 1980-90s. However, after 2000 other algorithms came to be regarded as more accurate and practical. In 2012 ANNs came back with a big bang: a new form of biologically-inspired ANNs, deep convolutional neural networks, showed surprisingly good performance on image classification and object detection tasks, far superior to all other methods available. Since then deep networks have broken records in many application domains, from object detection for autonomous vehicles to playing the game of Go and skin health diagnostics. Deep networks are currently revolutionising machine learning in academia and industry. They can be regarded as the most disruptive technology in any industry that involves machine learning, artificial intelligence, pattern recognition, data mining or control. This seminar aims at providing an overview of ANNs - old and new - with a special view towards how visualisations could help to explain how they work.
About the speaker: Stephan Chalup (Ph.D., Dipl.-Math.) is an associate professor at the University of Newcastle in Australia, where he is leading the Interdisciplinary Machine Learning Research Group and the Newcastle Robotics Lab. He studied mathematics with neuroscience at the University of Heidelberg and completed his Ph.D. in Computing Science at the Machine Learning Research Centre at Queensland University of Technology (QUT) in 2002. Stephan has published 100 research articles and is on the editorial boards of several journals. He is a member of the University of Newcastle's Priority Research Centre CARMA.
Groups of rooted tree automorphisms, and (weakly) branch groups in particular, have received considerable attention in the last few decades, due to the examples with unexpected properties that they provide, and their connections to dynamics and automata theory. These groups also showcase interesting phenomena in profinite group theory. I will discuss some of these and other profinite completions that one can use to study these groups, and how to find them. All these concepts will be defined in the talk.
This presentation will discuss the megatrends, both technological and societal, that are impacting the modern supply chain. In particular, the balance between people and machines will be explored in the context of future of work within supply chains. What are the appropriate roles for robotics within the supply chain of the future? What is the future for people in the supply chain? Examples of existing and emerging technologies will be presented to show that the future supply chain is close at hand.
The Chebyshev conjecture is a 59-year-old open problem in the fields of analysis, optimisation, and approximation theory, positing that Chebyshev subsets of a Hilbert space must be convex. Inspired by the work of Asplund, Ficken and Klee, we investigate an equivalent formulation of this conjecture involving Chebyshev subsets of the unit sphere. We show that such sets have superior structure and use the Radon-Nikodym Property to extract some local structural results about such sets.
We present $h$ and $p$-versions of the time domain boundary element method for boundary and screen problems for the wave equation in $\mathbb{R}^3$. First, graded meshes are shown to recover optimal approximation rates for solution in the presence of edge and corner singularities on screens. Then an a posteriori error estimate is presented for general discretizations, and it gives rise to adaptive mesh refinement procedures. We also discuss preliminary results for $p$ and $hp$-versions of the time domain boundary element method. Numerical experiments illustrate the theory. Joint with H. Gimperlein and D. Stark, Heriot-Watt University, Edinburgh.
One of the most contentious areas in Indigenising Curriculum is the Maths and Sciences. This presentation considers how Maths and Statistics can provide a solid and meaningful response to the Indigenising imperative that will fulfil the two criteria of socially just education.
Suggestions on both content areas and student recruitment, retention and success will be discussed. Examples will be based on the presenters' experiences as cultural facilitators in education from Foundations to the tertiary sector.
Associate Professor Kathy Butler and Ms Tammy Small are employed in the Office of the Pro Vice-Chancellor Indigenous Education and Research at the University of Newcastle. With Professor Steve Larkin, Tammy and Kathy are currently examining ways for the University to provide cultural competency training as a whole-of-university initiative.
This presentation is to assist academics in considering how to adapt programs and course content and delivery to incorporate, be mindful of and better appeal to people with Indigenous backgrounds and interests.
We consider variations on the commutative diagram consisting of the Fourier transform, the Sampling Theorem and the Paley-Wiener Theorem. We start from a generalization of the Paley-Wiener theorem and consider entire functions with specific growth properties along half-lines. Our main result shows that the growth exponents are directly related to the shape of the corresponding indicator diagram, e.g., its side lengths. Since many results from sampling theory are derived with the help from a more general function theoretic point of view (the most prominent example for this is the Paley-Wiener Theorem itself), we motivate that a closer examination and understanding of the Bernstein spaces and the corresponding commutative diagrams can—via a limiting process to the straight line interval [−A,A]—yield new insights into the Lp(R)-sampling theory. This is joint work with Gunter Semmler, Technische Universität Bergakademie Freiberg, Germany.
Schoenberg’s polynomial cardinal B-splines of order $n$ provide a family of compactly supported $C^{n-2}$-functions. We present several generalizations of these B-splines, discuss their properties, and relate them to fractional difference and differentiation operators. Potential applications are mentioned.
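For concreteness, here is a sketch (my own illustration) of the classical cardinal B-spline via the standard two-term recursion; the order-2 spline is the familiar hat function.

```python
def cardinal_bspline(n, x):
    """Schoenberg's cardinal B-spline of order n (degree n-1),
    supported on [0, n], via the standard two-term recursion."""
    if n == 1:
        return 1.0 if 0 <= x < 1 else 0.0
    return (x * cardinal_bspline(n - 1, x)
            + (n - x) * cardinal_bspline(n - 1, x - 1)) / (n - 1)

# order 2 is the hat function, peaking at x = 1 with value 1;
# the integer shifts of any order form a partition of unity
print(cardinal_bspline(2, 1.0))
```

The generalisations in the talk replace the integer order $n$ by fractional and complex parameters, which connects naturally to the fractional difference and differentiation operators mentioned above.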
Given a sequence of integers, one would like to understand the pattern which generates the sequence, as well as its asymptotics. If the sequence is viewed as the coefficients of the series expansion of a function, called its generating function, many questions regarding the sequence can be answered more easily. If the generating function satisfies a linear ODE or a nonlinear algebraic DE, the differential equation can be found if enough terms in the sequence are given. In this talk I'll discuss my implementation in C of such a search, applications, and a systematic search of the entire Online Encyclopedia of Integer Sequences (OEIS) for generating functions.
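The constant-coefficient special case of such a search is easy to sketch: solve a small linear system over $\mathbb{Q}$ for a candidate recurrence and then check it against the remaining terms. This fragment (my illustration, not the C implementation discussed in the talk) guesses such a recurrence:

```python
from fractions import Fraction

def guess_recurrence(seq, order):
    """Try to find constants c_1..c_r with
    a(n) = c_1 a(n-1) + ... + c_r a(n-r):
    solve r equations from the first terms, verify on the rest."""
    r = order
    rows = [[Fraction(seq[n - k]) for k in range(1, r + 1)] + [Fraction(seq[n])]
            for n in range(r, 2 * r)]
    # Gauss-Jordan elimination over the rationals
    for i in range(r):
        piv = next((j for j in range(i, r) if rows[j][i] != 0), None)
        if piv is None:
            return None
        rows[i], rows[piv] = rows[piv], rows[i]
        for j in range(r):
            if j != i and rows[j][i] != 0:
                fac = rows[j][i] / rows[i][i]
                rows[j] = [a - fac * b for a, b in zip(rows[j], rows[i])]
    c = [rows[i][r] / rows[i][i] for i in range(r)]
    # the candidate only counts if it predicts all remaining terms
    ok = all(seq[n] == sum(c[k] * seq[n - 1 - k] for k in range(r))
             for n in range(2 * r, len(seq)))
    return c if ok else None

fib = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
print(guess_recurrence(fib, 2))   # recovers a(n) = a(n-1) + a(n-2)
```

Searches for linear ODEs with polynomial coefficients, as in the talk, work on the same principle with a larger ansatz.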
I will present a brief survey of some recent results that deal with the characterization of hyperbolic dynamics in terms of the existence of appropriate Lyapunov functions. The main novelty of these results lies in the fact that they consider noninvertible and infinite-dimensional dynamics. This is a joint work with L. Barreira, C. Preda and C. Valls.
In this talk I will briefly introduce the mixed finite element method and show its applications. I consider Poisson, elasticity, Stokes and biharmonic equations as applications of the mixed finite element method. The mixed finite element method also arises naturally in Stokes flow and multi-physics problems, as well as when we consider non-conforming discretisation techniques. I will also present my recent work on the mixed finite element method for biharmonic and Reissner-Mindlin plate equations.
Colour images are represented by functions of two variables with three output components, and analysing them requires tools that can handle these dimensions. One method is to use Clifford algebras and their recently discovered Fourier transform. We prove that the Clifford Fourier transform has a Hardy space, and that its Paley-Wiener and Bernstein spaces are identical. Another method is to find two-dimensional wavelets that are non-separable. We achieve this through the use of the Douglas-Rachford projection algorithm, and hope to achieve it through the use of proximal alternating linearised methods. This talk briefly overviews these methods and the path to completion.
I will discuss how to relate regular origami tilings to vertex models in statistical mechanics. The Miura-ori origami pattern has found many uses in engineering as an auxetic metamaterial. I analyze the effect of crease assignment defects on the long-range order properties of the Miura-ori and 4 other foldable lattices. These defects are known to affect the material's compressibility properties, so my exact results help to understand how easy it is to tune an origami metamaterial to have desired compressibility properties by introducing a set density of defects. I have found that certain origami patterns are more easily tunable than others, and conversely, the long-range ordering of some are more stable with respect to defect formation. I have also found analytical expressions for the locations of phase transition points with respect to crease assignment ordering as well as layer ordering.
The aim of this workshop is to bring together the world's foremost experts on the theory of semigroups and their relationships to other fields of mathematics such as operator algebras and totally disconnected locally compact groups. This workshop will allow the international leaders in the field to come to Australia to teach young Australian ECRs, and to forge new collaborations with Australian mathematicians.
Details are available on the conference website.
Operator algebras associated to semigroups can be traced back to a famous theorem of Coburn from the 1960s. The theory has recently been reinvigorated through Xin Li's construction of semigroup C*-algebras. Li's construction has introduced new and interesting classes of C*-algebras, which have deep connections to number theory and dynamical systems. One connection that will be thoroughly explored through this meeting is that to the representation theory of totally disconnected locally compact groups.
This presentation will outline my research into fitness for purpose of tertiary algebra textbooks used in Iraq in the teaching of undergraduate algebra courses with regard to the training of pre-service teachers. The project draws on work done in textbook analysis, and work done into the teaching and learning of abstract algebra and the nature of proof.
It is well recognised that for many students, learning abstract algebra and the nature of proof is difficult (Selden, 2010). Courses in abstract algebra are central to many tertiary pre-service mathematics teacher programs, including in Iraq. Capaldi (2012) suggests that abstract algebra textbooks can lay the foundation for a course and greatly influence student understanding of the material. However, it has been found that there can be large differences in the textbooks used, at the school level at least, in different cultures (Alajmi, 2012; Fan & Zhu, 2007; Pepin & Haggarty, 2001). For instance, Mayer and Sims (1995) found that Japanese mathematics texts feature many more worked-out examples than texts used in the United States.
I will be examining the textbooks in light of theories by Harel and Sowder, and Stacey and Vincent, regarding types of proof and modes of reasoning (Stacey & Vincent, 2009), and Capaldi (2012) regarding readers' relationships with books.
The textbooks will also be examined to try to infer the underlying assumptions about pedagogies and knowledge made by the author(s). Baxter-Magolda's theory, linking forms of assessment to underlying theories of knowledge (Baxter-Magolda, 1992) will be helpful in this pursuit.
The theory of minimal surfaces (a.k.a. soap films) goes back to Euler’s discovery in 1741 that the catenoid is area-minimising. It is still a remarkably vibrant area of research. I will describe recent joint work with Franc Forstneric of the University of Ljubljana, Slovenia. We assemble all minimal surfaces with a given shape into a space. It is an infinite-dimensional space. What does it look like? We have been able to determine its "rough shape". I will explain what we mean by "rough shape" and describe the ingredients from complex analysis, differential topology, and homotopy theory that go into our result.
Lyapunov's second or direct method provides an easy-to-check sufficient condition for stability properties of equilibria. The converse question - given a stability property, does there exist an appropriate Lyapunov function? - has been fundamental in differentiating and classifying different stability properties, particularly with regards to "uniform" stability.
In this talk, I will review the usual textbook definitions for Lyapunov functions for time-varying systems and describe where they are deficient. Some interesting new sufficient (and probably necessary) conditions pop up along the way.
The research interest in pattern-avoiding permutations is inspired by Donald Knuth's work on stack-sorting. According to Knuth, a permutation can be sorted by passing through a single infinite stack if and only if it avoids the sub-permutation pattern 231. Murphy extended Knuth's work to two infinite stacks in series and found that the basis of the generated permutation class is infinite, while Elder proved that when one of the stacks is limited to depth two, the basis is finite and the set of permutations is algebraic. My research investigates the permutations generated by a stack of depth 3 and an infinite stack in series: the aim is to determine the basis and the nature of these permutations in terms of formal language theory.
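Knuth's criterion is easy to experiment with. In the sketch below (a Python illustration of mine, not part of the research), `avoids_231` checks the pattern condition directly and `stack_sortable` simulates sorting through a single stack; by Knuth's theorem the two always agree:

```python
def avoids_231(perm):
    """Return True if perm (a list of distinct integers) avoids the
    pattern 231, i.e. has no i < j < k with perm[k] < perm[i] < perm[j]."""
    n = len(perm)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if perm[k] < perm[i] < perm[j]:
                    return False
    return True


def stack_sortable(perm):
    """Simulate sorting through a single stack: push each input, and pop
    to the output whenever the stack top is the next value needed."""
    stack, output, need = [], [], 1
    for x in perm:
        stack.append(x)
        while stack and stack[-1] == need:
            output.append(stack.pop())
            need += 1
    while stack and stack[-1] == need:
        output.append(stack.pop())
        need += 1
    return output == sorted(perm)
```

For example, [3, 1, 2] is stack-sortable (and avoids 231), while [2, 3, 1] contains the pattern 231 and cannot be sorted.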
We determine the Borel complexity of the topological isomorphism problem for profinite, t.d.l.c., and Roelcke precompact non-Archimedean groups, by showing it is equivalent to graph isomorphism.
For oligomorphic groups we merely establish this as an upper bound.
Joint work with Kechris and Tent.
Control Lyapunov functions (CLFs) for the control of dynamical systems have faded from the spotlight in recent years, even though their full potential has not yet been explored. To reactivate research on CLFs, we review existing results on Lyapunov functions and (nonsmooth) CLFs in the context of stability and stabilization of nonlinear dynamical systems. Moreover, we highlight open problems and results on CLFs for destabilization. The talk concludes with ideas on complete CLFs, which combine the concepts of stability and instability. The results presented in the talk are illustrated and motivated by the examples of a nonholonomic integrator and Artstein's circles.
Part of my 2016 SSP included completion of a semi-historical review on the mathematics of W.N. Bailey, a familiar name in some combinatorics circles in relation with the "Bailey lemma" and "Bailey pairs." My personal encounters with this mathematician from the first half of the 20th century were somewhat different, relating more to applications of special functions to number theory—a subject Bailey himself never dealt with. One motivation for my writing was the place where I spent my SSP—details to be revealed in the talk. There will be some formulas displayed, sometimes scary ones, but they will serve as a background to historical achievements. A broad audience is welcome.
In this talk we discuss a new approach to the Hamilton cycle problem (HCP), one of the classical problems in combinatorial mathematics. It can be stated as: given a graph G, find a cycle that passes through every vertex exactly once, or determine that no such cycle exists. In 1994, Filar and Krass developed a new model for the HCP by embedding the problem into a Markov decision process. This approach motivated a new line of research, which has since been extended by several others. In this approach, a polytope corresponding to a given graph G is constructed, and the search for Hamiltonian cycles in a Hamiltonian graph G becomes a search for particular extreme points (called Hamiltonian extreme points) among the extreme points of that polytope. In this research, we design a Markov chain with certain properties to sample Hamiltonian extreme points of that polytope. More precisely, we study a specific class of input graphs, the so-called random graphs. Some preliminary theoretical results are presented in this talk.
In this talk we consider a class of monotone operators which are appropriate for symbolic representation and manipulation within a computer algebra system. Various structural properties of the class (e.g., closure under taking inverses, resolvents) are investigated as well as the role played by maximal monotonicity within the class. In particular, we show that there is a natural correspondence between our class of monotone operators and the subdifferentials of convex functions belonging to a class of convex functions deemed suitable for symbolic computation of Fenchel conjugates which were previously studied by Bauschke & von Mohrenschildt and by Borwein & Hamilton. A number of illustrative computational examples utilising the introduced class of operators will be provided including computation of proximity operators, recovery of a convex penalty function associated with the hard thresholding operator, and computation of superexpectations, superdistributions and superquantiles with specialization to risk measures.
I am going to look at three unsolved graph theory problems for which the same family of graphs presents a barrier to either solving or making substantial progress on the problems. The graphs in this family are called honeycomb toroidal graphs. The three problems are not closely related.
Problem solving, communication and information literacy are just a few graduate attributes that employers value, yet it commonly appears that students upon graduating show only limited improvement in these areas. For instance, 3rd year students can still be thrown by relatively simple unfamiliar problems, even after working actively on numerous related exercises and problems throughout their degree. I will discuss some of the things I have implemented in my teaching to specifically target the development of student graduate attributes. My experience is with teaching mathematics, physics and engineering students, however much of my discussion will be non-discipline-specific.
In a way, mathematics can be seen as a language game, where we use symbols, together with some rewriting rules, to represent objects we are interested in and then ask what can be said about the sequences of symbols (languages) that capture certain phenomena. For example, given a group G with generators a and b, can we recognise (using a computer) the sequences of generators that correspond to non-trivial elements of G? If yes, how strong a computer do we need, i.e. how complicated is the language we are studying?
There is a natural duality between various types of computational models and the classes of languages they can recognise. Until recently most problems/languages in group theory were classified within the Chomsky hierarchy, but there are more computational models to consider. In the talk I will briefly introduce L-systems, a family of classes of languages originally developed to model the growth of algae, and show that the co-word problem in Grigorchuk's group, a group of particularly nice transformations of the infinite binary tree, can be seen as a language corresponding to a fairly simple L-system.
Totally disconnected, locally compact (t.d.l.c.) groups are a large class of topological groups that arise from a few different sources, for instance as automorphism groups of a range of algebraic and combinatorial structures, or from the study of isomorphisms between finite index subgroups of a given group. A general theory has begun to emerge in recent years, based on the interaction between small-scale and large-scale structure in t.d.l.c. groups. I will give a survey of some ways in which these groups arise and some of the tools that have been developed for understanding them.
In recent joint work on equilibrium states on semigroup C*-algebras with Afsar, Brownlowe, and Larsen, we discovered that the structure of equilibrium states admits an elegant description in terms of substructures of the original semigroup. More precisely, we consider two almost contrary subsemigroups and related features to obtain a unifying picture for a number of earlier case studies. Somewhat surprisingly, all the examples from the case studies satisfy a list of four abstract properties (and are then called admissible). The nature and presence of these properties is yet to be fully understood. In this talk, I will focus on a class of examples arising as Zappa-Szép products of right LCM semigroups which showcases some interesting features. No prerequisites in operator algebras are required to follow this talk.
This talk gives an outline of (mostly unfinished) work done collaboratively while on sabbatical in semester 2 last year. Join me as we travel through the USA, Germany, Belgium and Austria. Your guide will share off-the-beaten-track highlights such as quaternionic splines, prolate shift systems, higher-dimensional Hardy, Paley-Wiener and Bernstein spaces, the Clifford Fourier transform, multidimensional prolates, and a Jon Borwein-inspired optimization-based approach to the construction of multidimensional wavelets. Breakfast not included.
In this talk, we discuss a new approach to demand forecasting in supply chains. Demand forecasting is an essential task in supply chain management. Due to the endogenous and exogenous factors impacting a supply chain, its regime may vary significantly. Such regime changes can bring high volatility to the demand time series, and consequently a single statistical model may not suffice to forecast demand with a desirable level of precision. We develop a nexus between stochastic processes and statistical models to forecast demand in supply chains with regime switching. Preliminary results on real-world time series data sets are promising.
In this talk, I will describe the conditional value at risk (CVaR) measure used in modelling risk aversion in decision making problems.
CVaR is a coherent risk measure, which makes it well suited to modelling risk aversion.
I will then present two applications of CVaR. The first application considers all problems that are representable by decision trees. In this application, I show that these problems under the CVaR criterion can be solved efficiently by solving a linear program. In the second application, I consider a basic problem in the area of production planning with random yield. For this problem, I present a risk aversion model. The model is nonconvex. I present an efficient locally optimal solution method and then provide a sufficient optimality condition.
There is to date no overarching classification theorem for C*-algebras, which means the theory of C*-algebras is an example-driven field of mathematics. Perhaps the most important class of examples are group C*-algebras, which are as old as the field itself. An analogous construction of C*-algebras associated to semigroups has been an active area of research among operator algebraists since Coburn's theorem regarding the universality of the C*-algebra generated by a single isometry appeared in the 1960s. In July this year, Newcastle will host the AMSI/AustMS sponsored event "Interactions between operator algebras and semigroups". In this talk I will give a gentle introduction to the theory of semigroup C*-algebras and perhaps it will convince some of you to come along and take part in the meeting.
Gamification refers to the use of game elements in non-game contexts and has been applied in workplaces, marketing, health programs and other areas, with mounting evidence of increased interest, involvement, satisfaction and performance among participants. More recently, gamification has been emerging as a teaching method with great potential to improve students' motivation and engagement. Gamification in education should not be confused with playing educational games: it uses only concepts such as points, leaderboards, etc., rather than computer games themselves. In this talk we describe the gamification of a theoretical computer science course that we carried out in 2014, 2015 and 2016, as well as our experience with two other STEM courses.
The talk will include general information on the current state of plasma fusion as an energy source and some more detailed aspects of this research area.
In 1975, culminating more than 40 years of published work by Paul Erdos on the problem, he and John Selfridge proved that the product of consecutive integers cannot be a nonzero perfect power. Their proof was a remarkable combination of elementary and graph theoretic arguments. Subsequently, Erdos conjectured that this result can be generalized to a product of consecutive terms in an arithmetic progression, under certain basic assumptions. In this talk, we discuss joint work with Samir Siksek in the direction of proving Erdos' conjecture. Our approach is via techniques based upon the modularity of Galois representations, bounds for the number of supersingular primes for elliptic curves, and analytic estimates for Dirichlet character sums.
In this talk, we discuss our new approach to designing reverse logistics models for dairy industries, in particular whey products. Whey is a by-product of cheese making with many applications, spanning dairy and meat products to pharmaceuticals. We develop a hierarchical location-routing model for a whey recovery network design. In this class of models, the location and routing decisions are made simultaneously. As the problem is NP-hard, even small instances may be impossible to solve efficiently. We suggest different approaches, such as adding valid inequalities and improving lower and upper bounds, to solve the problem in a reasonable amount of time.
Mathscraft is a workshop for junior high school students that aims to give them the experience of doing maths the way research mathematicians do. It is coordinated and sponsored by the ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS), and sessions are conducted by Anthony Harradine (Prince Alfred College, Adelaide).
In a Mathscraft session there are up to 10 groups, each comprising three students (years 7-10), one teacher and one mathematician. The teams are given mathematical problems and are guided through a problem-solving process. The problems and the process are designed to mimic the mathematics that is done by research mathematicians - exploring, noticing patterns, making conjectures, proving them, figuring out why, and thinking of ways to extend the problem.
In this talk I'll describe the design of the problems and process (with examples), and explain the motivations behind them. I'll also talk about a Professional Development workshop that we ran for teachers in November last year, which had the aim of training them to run Mathscraft sessions in their own local areas. This workshop was sponsored by ACEMS and MATRIX.
Jonathan Michael Borwein (20 May 1951 - 2 Aug 2016) had many talents, among which were his abilities to make discoveries in mathematics, to seek tenaciously for proofs of these, and to do both of those things in collegial concert with other workers. In this colloquium I shall give three examples of situations in which I had the pleasure of seeing those talents in action. They concern multiple zeta values, walks on lattices, and modular forms. In each case I shall give a notable identity, comment on its proof, and indicate further work that was provoked by the discovery. The identities in question are chosen to be comprehensible to anyone with an undergraduate education in mathematics and also to people, like myself, who lack that particular qualification.
A graph labeling is an assignment of integers to the vertices or edges, or both, subject to certain conditions. These conditions are usually expressed in terms of the weights of some evaluating function. Based on these conditions there are several types of graph labelings, such as graceful, magic, antimagic, sum and irregular labelings. In this research, we look at the H-supermagic labeling of firecracker, banana tree, flower and grid graphs; the exclusive sum labeling of trees; and the edge irregularity strength of grid graphs.
Incremental stability describes the asymptotic behavior between any two trajectories of a dynamical system. Such properties are of interest, for example, in the study of observers or the synchronization of chaos. In this talk, we develop the notions of incremental stability and incremental input-to-state stability (ISS) for discrete-time systems. We derive Lyapunov function characterizations for these properties as well as a useful summation-to-summation formulation of the incremental stability property.
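As a toy illustration of incremental stability (my example, not one from the talk), consider the scalar discrete-time system $x_{k+1} = 0.5\,x_k + u_k$: any two trajectories driven by the same input sequence approach each other geometrically, whatever the inputs are.

```python
def step(x, u):
    # One step of the contracting system x_{k+1} = 0.5*x_k + u_k
    return 0.5 * x + u


def trajectory(x0, inputs):
    # Roll out the system from initial condition x0 under a given input sequence
    xs = [x0]
    for u in inputs:
        xs.append(step(xs[-1], u))
    return xs


inputs = [1.0, -0.5, 2.0, 0.0, 0.3] * 4
xa = trajectory(10.0, inputs)
xb = trajectory(-3.0, inputs)

# The gap halves at every step: |x_k - y_k| = 0.5**k * |x_0 - y_0|,
# independently of the (common) input sequence.
gaps = [abs(a - b) for a, b in zip(xa, xb)]
```

Here the gap dynamics are autonomous because the system is linear; the talk's Lyapunov characterizations generalise this geometric contraction to nonlinear systems.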
I will discuss how to solve free group equations using a practical computer program. Ciobanu, Diekert and Elder recently gave a theoretical algorithm which runs in nondeterministic space $n\log n$, but implementing their method as an actual computer program presents many challenges, which I will describe.
Some Engel words and also commutators of commutators can be expressed as products of powers. I discuss recent work of Colin Ramsay in this area, using PEACE (Proof Extraction After Coset Enumeration), and in particular provide expressions for commutators of commutators as short products of cubes.
The Australian Council on Healthcare Standards collates data on measures of performance in a clinical setting in six-month periods. How can these data best be utilised to inform decision-making and systems improvement? What are the perils associated with collecting data in six-month periods, and how may these be addressed? Are there better ways to analyse, report and guide policy?
The Council for Aid to Education is one of many organisations internationally attempting to assess tertiary institutional performance. Value-add modelling is a technique intended to inform system performance. How valid and reliable are these techniques? Can they be improved?
Educational techniques and outreach activities are employed across the education system and the wider community for the purposes of increasing access, equity and understanding.
When new concepts are formed, a well-designed instrument to assess and provide evidence of their performance is required. Does immersion in professional experience activity enable pre-service teachers to achieve teaching standards? Do engagement activities for schools in remote and rural areas increase students’ aspirations and engagement with tertiary institutions?
Forensic anthropologists deal with the collection of bones and profiling individuals based on the remains found. How can statistics inform such decision-making?
Such questions and existing and potential answers will be discussed in the context of research collaborations with Taipei Medical University (Taiwan), Health Services Research Group, Australian Council on Healthcare Standards, Hunter Medical Research Institute, School of Education, Wollotuka Institute, School of Environmental Sciences and a Forensic Anthropologist.
A challenge with our large-enrolment courses is to manage assessment resources: questions, quizzes, assignments and exams. We want traditional in-class assessment to be easier, quicker and more reliable to produce, in particular where multiple versions of each assessment are required. Our approach is to
We have implemented this within standard software: LaTeX, Ruby, git, and our favourite mathematics software.
We will briefly show off our achievements in 2016, including new features of the software and how we've used them in our teaching. We then invite discussion on what we can do to help our colleagues use these tools.
For a two-coloring of the vertex set of a simple graph $G$ consider the following color-change rule: a red vertex is converted to blue if it is the only red neighbor of some blue vertex. A vertex set $S$ is called zero-forcing if, starting with the vertices in $S$ blue and the vertices in the complement $V \setminus S$ red, all the vertices can be converted to blue by repeatedly applying the color-change rule. The minimum cardinality of a zero-forcing set for the graph $G$ is called the zero-forcing number of $G$, denoted by $Z(G)$.
There is a conjecture connecting the zero forcing number, minimum degree $d$ and girth $g$, as follows: if $G$ is a graph with girth $g \geq 3$ and minimum degree $d \geq 2$, then $Z(G) \geq d + (d-2)(g-3)$.
I shall discuss a recent paper in which the conjecture is proved for all graphs with girth $g \leq 10$.
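The colour-change rule lends itself to direct simulation. The following Python sketch (illustrative only; the helper names are my own) computes the zero-forcing closure of a starting set and then, by brute force over subsets, the zero-forcing number $Z(G)$ for small graphs:

```python
from itertools import combinations


def closure(adj, blue):
    """Repeatedly apply the colour-change rule: a blue vertex with exactly
    one red neighbour forces that neighbour to become blue."""
    blue = set(blue)
    changed = True
    while changed:
        changed = False
        for v in list(blue):
            red = [u for u in adj[v] if u not in blue]
            if len(red) == 1:
                blue.add(red[0])
                changed = True
    return blue


def zero_forcing_number(n, edges):
    """Brute-force Z(G) for a small graph on vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if closure(adj, S) == set(range(n)):
                return k
```

For instance, a path has zero-forcing number 1 (an endpoint forces the whole path), while a cycle needs two adjacent blue vertices.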
Targeted Audience: All early career staff and PhD students; other staff welcome
Abstract: Many of us have been involved in discussions revolving around the problem of choosing suitable thesis topics and projects for post-graduate students, honours students and vacation research students. The panel is going to present some ideas that we hope people in the audience will find useful as they get ready for or continue with their careers.
About the Speakers: Professor Brian Alspach has supervised thirteen PhD students, twenty-five MSc students, nine post-doctoral fellows and a dozen undergraduate scholars over his fifty-year career. Professor Eric Beh has twenty years' international experience in the analysis of categorical data, with a focus on data visualisation. He has supervised, or is currently supervising, about ten PhD students. Dr Mike Meylan has twenty years' research experience in applied mathematics, both leading projects and working with others. He has supervised five PhD students and three post-doctoral fellows.
Today's discrete mathematics seminar is dedicated to Mirka Miller. I am going to present the beautiful Hoffman-Singleton (1960) paper, which established the possible valencies for Moore graphs of diameter 2, gave us the Hoffman-Singleton graph of order 50, and gave us one of the intriguing, still unsettled problems in combinatorics. The proof is entirely linear algebra, and is one that any serious student of discrete mathematics should see at some point. This is the general area in which Mirka made many contributions.
In this talk I will present a class of C*-algebras known as "generalised Bunce-Deddens algebras" which were constructed by Kribs and Solel in 2007 from directed graphs and sequences of natural numbers. I will present answers to questions asked by Kribs and Solel about the simplicity and the classification of these C*-algebras. These results are from my PhD thesis supervised by Dave Robertson and Aidan Sims.
This afternoon (31 October) we shall complete the discussion about vertex-minimal graphs with dihedral automorphism groups. I have attached an outline of what was covered in the first two weeks.
Learning to rank is a machine learning technique broadly used in many areas, such as document retrieval, collaborative filtering and question answering. We present experimental results which suggest that the performance of the current state-of-the-art learning-to-rank algorithm LambdaMART, when used for document retrieval in search engines, can be improved if standard regression trees are replaced by oblivious trees. We provide a comparison of both variants, and our results demonstrate that the use of oblivious trees can improve performance by more than 2.2%. Additional experimental analysis of the influence of the number of features and the size of the training set is also provided, and confirms the desirable properties of oblivious decision trees.
About the Speaker: Dr Michal Ferov is a Postdoctoral Research Fellow in the School of Mathematical and Physical Sciences, Faculty of Science and Information Technology.
I am studying the complexity of solving equations over different algebraic objects, like free groups, virtually free groups, and hyperbolic groups. We have an NSPACE(n log n) algorithm to find solutions in free groups, which I will try to briefly explain. Applications include pattern recognition and machine learning, and first order theories in logic.
Maintenance plays a crucial role in the management of rail infrastructure systems as it ensures that infrastructure assets (e.g., tracks, signals, and rail crossings) are in a condition that allows safe, reliable, and efficient transport. An important and challenging problem facing planners is the scheduling of maintenance activities which must consider the movement and availability of the maintenance resources (e.g., equipment and crews). The problem can be viewed as an inventory routing problem (IRP) in which vehicles deliver product to customers so as to ensure that the customers have sufficient inventory to meet future demand. In the case of rail maintenance, the customers are the infrastructure assets, the vehicles correspond to the resources used to perform the maintenance, and the product that is in demand, the inventory of which is replenished by the vehicle, is the asset condition. To the best of our knowledge, such a viewpoint of rail maintenance has not been previously considered.
In this thesis we will study the IRP in the rail maintenance scheduling context. There are several important differences between the classical IRP and our version of the problem. Firstly, we need to differentiate between stationary and moving maintenance. Stationary maintenance can be thought of as having demand for product at a specific location, or point, while moving maintenance is more like demand for product distributed along a line between two points. Secondly, when performing maintenance, trains may be subject to speed restrictions, be delayed, or be rerouted, all of which affect the infrastructure assets and their condition differently. Finally, the long-term maintenance schedules that are of interest are developed annually. IRPs with such a long planning horizon are intractable to direct solution approaches and therefore require the development of customised solution methodologies.
This week we shall continue by introducing the cast of characters to be used for producing minimal-order graphs with dihedral automorphism group.
Many governments and international finance organisations use a carbon price in cost-benefit analyses, emissions trading schemes, quantification of energy subsidies, and modelling the impact of climate change on financial assets. The most commonly used value in this context is the social cost of carbon (SCC). Users of the social cost of carbon include the US, UK, German, and other governments, as well as organisations such as the World Bank, the International Monetary Fund, and Citigroup. Consequently, the social cost of carbon is a key factor driving worldwide investment decisions worth many trillions of dollars.
The social cost of carbon is derived using integrated assessment models that combine simplified models of the climate and the economy. One of three dominant models used in the calculation of the social cost of carbon is the Dynamic Integrated model of Climate and the Economy, or DICE. DICE contains approximately 70 parameters as well as several exogenous driving signals such as population growth and a measure of technological progress. Given the quantity of finance tied up in a figure derived from this simple highly parameterized model, understanding uncertainty in the model and capturing its effects on the social cost of carbon is of paramount importance. Indeed, in late January this year the US National Academies of Sciences, Engineering, and Medicine released a report calling for discussion on the various types of uncertainty in the overall SCC estimation approach and addressing how different models used in SCC estimation capture uncertainty.
This talk, which focuses on the DICE model, essentially consists of two parts. In Part One, I will describe the social cost of carbon and the DICE model at a high-level, and will present some interesting preliminary results relating to uncertainty and the impact of realistic constraints on emissions mitigation efforts. Part one will be accessible to a broad audience and will not require any specific technical background knowledge. In Part Two, I will provide a more detailed description of the DICE model, describe precisely how the social cost of carbon is calculated, and indicate ongoing developments aimed at improving estimates of the social cost of carbon.
König (1936) asked whether every finite group G is realized as the automorphism group of a graph. Frucht answered the question in the affirmative, but his answer involved graphs whose orders were substantially larger than the orders of the groups, leading to the question of finding the smallest graph with a given automorphism group. We shall discuss some of the early work on this problem and some recent results for the family of dihedral groups.
For over 25 years, Wolfram Research has been serving Educators and Researchers. In the past 5 years, we have introduced many award winning technology innovations like Wolfram|Alpha Pro, Wolfram SystemModeler, Wolfram Programming Lab, and Natural Language computation. Join Craig Bauling as he guides us through the capabilities of Mathematica. Craig will demonstrate the key features that are directly applicable for use in teaching and research. Topics of this technical talk include
Prior knowledge of Mathematica is not required - new users are encouraged. Current users will benefit from seeing the many improvements and new features of Mathematica 11.
We discuss ongoing work in convex and non-convex optimization. In the convex setting, we use symbolic computation to study problems which require minimizing a function subject to constraints. In the non-convex setting, we use a variety of computational means to study the behavior of iterated Douglas-Rachford method to solve feasibility problems, finding an element in the intersection of several sets.
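As a toy illustration of the Douglas-Rachford method on a non-convex feasibility problem (my sketch, not an example from the talk), the following Python snippet seeks a point in the intersection of the unit circle and the line $y = 1/2$, iterating "reflect, reflect, average":

```python
import numpy as np


def P_circle(x):
    # Nearest-point projection onto the unit circle (a non-convex set);
    # assumes x is not the origin.
    return x / np.linalg.norm(x)


def P_line(x):
    # Projection onto the horizontal line y = 1/2.
    return np.array([x[0], 0.5])


def reflect(P, x):
    # Reflection through a set via its projection: R = 2P - I.
    return 2 * P(x) - x


x = np.array([2.0, 3.0])
for _ in range(500):
    # Douglas-Rachford iteration: x <- (x + R_line(R_circle(x))) / 2
    x = 0.5 * (x + reflect(P_line, reflect(P_circle, x)))

sol = P_circle(x)  # shadow point; should lie in the intersection
```

From this starting point the iterates converge to a fixed point whose projection is the intersection point $(\sqrt{3}/2, 1/2)$, illustrating that the method can succeed even without convexity.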
The formation of high-mass stars (> 8 times more massive than our sun) poses an enormous challenge in modern astrophysics. Theoretically, it is difficult to understand whether the final mass of a high-mass star is accreted locally or from afar. Observationally, it is difficult to observe the early cold stages because they have relatively short lifetimes and also occur in very opaque molecular clouds. These early stages, however, can be probed by emission from molecular lines emitting at centimetre, millimetre, and sub-millimetre wavelengths. Our recent work clearly demonstrates that dense molecular clumps embedded in the filamentary "Infrared Dark Clouds" spawn high-mass stars, and that these clumps evolve as star-formation activity progresses within them. We have now identified hundreds of clumps in the earliest "pre-stellar" stage. Our MALT90 and RAMPS surveys reveal that these clumps are collapsing, confirming a prediction from "competitive accretion" models. New observations with the ALMA telescope demonstrate that turbulence--and not gravity--dominates the structure of "the Brick", the Milky Way's most massive "pre-stellar" clump.
Come join us for a discussion and public forum on 'Creativity & Mathematics' at Newcastle Museum on Monday, 1st August. We've lined up world leading experts from a diverse set of disciplines to shed some light on the connection between creativity and mathematics.
It's free, but please register for catering purposes. It begins at 6:30 pm with finger food and a chat before the forum itself gets under way at 7 pm.
The panel discussion and forum will have lots of audience involvement. The panel members are from a diverse group of disciplines each concerned in some way with the relationship between creativity and mathematics. Prof. John Wilson (The University of Oxford), a leading expert on group theory, is intrigued by the similarities between mathematicians finding new ideas and composers creating new music. Prof. George Willis (University of Newcastle) will talk about the creativity of mathematics itself. Prof. Michael Ostwald will spin gold around mathematical constraints and architectural forms. A/Prof. Phillip McIntyre is an international expert on creativity and author of The Creative System in Action. He has been described as having a mind completely unpolluted by mathematics!
Come along and enjoy an evening of mental stimulation and unexpected insights. You never know: participants might walk away with a completely different view of mathematics and its place in the world.
The standard height function $H(\mathbf p/q) = q$ of simultaneous approximation can be calculated by taking the LCM (least common multiple) of the denominators of the coordinates of the rational points: $H(p_1/q_1,\ldots,p_d/q_d) = \mathrm{lcm}(q_1,\ldots,q_d)$. If the LCM operator is replaced by another operator such as the maximum, minimum, or product, then a different height function and thus a different theory of simultaneous approximation will result. In this talk I will discuss some basic results regarding approximation by these nonstandard height functions, as well as mentioning their connection with intrinsic approximation on Segre manifolds using standard height functions. This work is joint with Lior Fishman.
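As a small illustration (a sketch of mine, not from the talk), the four operators can assign genuinely different heights to the same rational point:

```python
from math import gcd
from functools import reduce


def lcm(a, b):
    return a * b // gcd(a, b)


def heights(point):
    """Heights of a rational point, given as (p, q) pairs in lowest terms,
    under several choices of operator applied to the denominators q_i."""
    qs = [q for _, q in point]
    return {
        "lcm": reduce(lcm, qs),
        "max": max(qs),
        "min": min(qs),
        "product": reduce(lambda a, b: a * b, qs),
    }
```

For the point $(1/4, 1/6)$, for example, the four heights are 12, 6, 4 and 24 respectively, so the resulting approximation theories really do diverge.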
Dr Simmons is a visitor of Dr Mumtaz Hussain.
Let $\Sigma_d^{++}(\R)$ be the set of positive definite matrices with determinant 1 in dimension $d\ge 2$. Identifying two $SL_d(\Z)$-congruent elements in $\Sigma_d^{++}(\R)$ gives rise to the space of reduced quadratic forms of determinant one, which in turn can be identified with the locally symmetric space $X_d:=SL_d(\Z)\backslash SL_d(\R)\slash SO_d(\R)$. Equip the latter space with its natural probability measure coming from the Haar measure on $SL_d(\R)$. In 1998, Kleinbock and Margulis established very sharp estimates for the probability that an element of $X_d$ takes a value less than a given real number $\delta>0$ over the non-zero lattice points $\Z^d\backslash\{ \bm{0} \}$.
This talk will be concerned with extensions of such estimates to a large class of probability measures arising either from the spectral or the Cholesky decomposition of an element of $\Sigma_d^{++}(\R)$. The sharpness of the bounds thus obtained is also established for a subclass of these measures.
This theory has been developed with a view towards applications to Information Theory. Time permitting, we will briefly introduce this topic and show how the estimates previously obtained play a crucial role in the analysis of the performance of communication networks.
This is work joint with Evgeniy Zorin (University of York). Dr Adiceam is a visitor of Dr Mumtaz Hussain.
The finite element method is a very popular technique for approximating solutions of partial differential equations. The mixed finite element method is a type of finite element method in which extra variables are introduced into the formulation; this is useful for problems where more than one unknown is of interest. In this research, we apply the mixed finite element method to several applications, such as the Poisson equation, the elasticity equation, and a sixth-order problem. Furthermore, we also utilise the mixed finite element method to solve the linear wave equation, which arises from real-world problems.
The density of 1's in the Kolakoski sequence is conjectured to be 1/2. Proving this is an open problem in number theory. I shall cast the density question as a problem in combinatorics, and give some visualisations which may suggest ways to gain further insight into the conjecture.
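As an illustrative sketch (mine, not part of the talk), the Kolakoski sequence is easy to generate, and the empirical density of 1's is strikingly close to 1/2:

```python
def kolakoski(n):
    # First n terms of the Kolakoski sequence over {1, 2}: the sequence
    # of its own run lengths, starting 1, 2, 2, 1, 1, 2, 1, 2, 2, ...
    seq = [1, 2, 2]
    i = 2  # seq[i] is the length of the next run to append
    while len(seq) < n:
        next_sym = 1 if seq[-1] == 2 else 2
        seq.extend([next_sym] * seq[i])
        i += 1
    return seq[:n]

s = kolakoski(100_000)
density = s.count(1) / len(s)
print(density)  # empirically very close to 0.5
```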
I continue the discussion of the Erdos-Szekeres conjecture about points in convex position with an outline of the recent proof of an asymptotic version of the conjecture.
In 1935 Erdős and Szekeres proved that there exists a function f such that among f(n) points in the plane in general position there are always n that form the vertices of a convex n-gon. More precisely, they could prove a lower and an upper bound for f(n) and conjectured that the lower bound is sharp. After 70 years with very limited progress, there have been a couple of small improvements of the upper bound in recent years, and finally last month Andrew Suk announced a huge step forward: a proof of an asymptotic version of the conjecture.
I plan two talks on this topic: (1) a brief introduction to Ramsey theory, and (2) an outline of Suk's proof.
The zero forcing number, Z(G), of a graph G is the minimum cardinality of a set S of black vertices (the vertices in V(G)\S are colored white) such that V(G) is turned black after finitely many applications of "the color-change rule": a white vertex is turned black if it is the only white neighbor of a black vertex.
The zero forcing number was introduced by the "AIM Minimum Rank – Special Graphs Work Group". In this talk, I present an overview of the results obtained in their paper.
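The color-change rule is easy to simulate, which gives a quick way to test whether a given set is a zero forcing set (a brute-force sketch of my own, not from the paper under discussion):

```python
def forces(adj, black):
    # adj: dict vertex -> set of neighbours; black: initial black set.
    # Repeatedly apply the color-change rule; return True if all vertices
    # eventually turn black, i.e. black is a zero forcing set.
    black = set(black)
    changed = True
    while changed:
        changed = False
        for v in list(black):
            white_nbrs = adj[v] - black
            if len(white_nbrs) == 1:  # v forces its unique white neighbour
                black |= white_nbrs
                changed = True
    return black == set(adj)

# Path P4: one endpoint forces the whole path, so Z(P4) = 1
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(forces(path, {0}))   # True
# Cycle C4: a single black vertex has two white neighbours and gets stuck
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(forces(cycle, {0}))  # False
```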
In 2000, after investigating the published literature (for which I had reason then), I realised that there was clearly confusion surrounding the question of how WW2 Japanese army and navy codes had been broken by the Allies.
Fourteen years later, my academic colleague Peter Donovan and I understood why that was so: the archival documents needed to perform this task, plus the mathematical understanding needed to interpret these documents correctly, had only come to light through our combined research over this long period. The result, apart from a number of research publications in journals, is our book, "Code Breaking in the Pacific", published by Springer International in 2014.
Both the Imperial Japanese Army (IJA) and the Imperial Japanese Navy (IJN) used an encryption system involving a code book and then a second stage encipherment, a system which we call an additive cipher system, for their major codes – not a machine cipher such as the Enigma machines used widely by German forces in WW2 or the Typex/Sigaba/ECM machines used by the Allies. Thus, the type of attack needed to crack such a system is very different to those described in books about Bletchley Park and its successes against Enigma ciphers.
However, there is a singular difference: while the IJN’s main coding system, known to us as JN-25, was broken from its inception and throughout the Pacific War, yielding for example the intelligence information that enabled the battles of the Coral Sea and Midway to occur, or the shooting down of Admiral Yamamoto to be planned, the many IJA coding systems in use were, with one exception, never broken!
I will describe the general structure of additive systems, the rational way developed to attack them and its usual failure in practice, and the "miracle" that enabled JN-25 to be broken - probably the best-kept secret of the entire Pacific War: multiples of three! Good maths, but not highly technical!
Lehmer's famous question concerns the existence of monic integer coefficient polynomials with Mahler measure smaller than a certain constant. Despite significant partial progress, the problem has not been fully resolved since its formulation in 1933. A powerful result independently proven by Lawton and Boyd in the 1980s establishes a connection between the classical Mahler measure of single variable polynomials and the generalized Mahler measure of multivariate polynomials. This led to speculation that it may be possible to answer Lehmer's question in the affirmative with a multivariate polynomial although the general consensus among researchers today is that no such polynomial exists. We show that each possible candidate among two variable polynomials corresponding to curves of genus 1 can be bi-rationally mapped onto a polynomial with Mahler measure greater than Lehmer's constant. Such bi-rational maps are expected to preserve the Mahler measure for large values of a certain parameter.
Milutin is a completing Honours Student of Wadim Zudilin.
A metric generator is a set W of vertices of a graph G such that for every pair of vertices u,v of G, there exists a vertex w in W with the condition that the length of a shortest path from u to w is different from the length of a shortest path from v to w. In this case the vertex w is said to resolve or distinguish the vertices u and v. The minimum cardinality of a metric generator for G is called the metric dimension. The metric dimension problem is to find a minimum metric generator in a graph G. In this talk I will discuss the metric dimension and partition dimension of Cayley (di)graphs.
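To make the definition concrete, here is a brute-force sketch of my own (exponential in the number of vertices, so only for tiny graphs, and not the methods of the talk):

```python
from itertools import combinations

def bfs_dist(adj, s):
    # single-source shortest-path lengths in an unweighted graph
    dist = {s: 0}
    frontier = [s]
    while frontier:
        nxt = []
        for v in frontier:
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    nxt.append(u)
        frontier = nxt
    return dist

def metric_dimension(adj):
    # smallest |W| whose distance vectors distinguish all vertices
    V = list(adj)
    dist = {v: bfs_dist(adj, v) for v in V}
    for size in range(1, len(V) + 1):
        for W in combinations(V, size):
            sigs = {tuple(dist[w][v] for w in W) for v in V}
            if len(sigs) == len(V):  # all distance vectors distinct
                return size
    return len(V)

# the cycle C5 has metric dimension 2
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(metric_dimension(c5))  # 2
```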
This week I shall finish my discussion of sequenceable and R-sequenceable groups.
I am now refereeing a manuscript on the above and I’ll tell you about its contents.
Start by placing piles of indistinguishable chips on the vertices of a graph. A vertex can fire if it's supercritical; i.e., if its chip count exceeds its valency. When this happens, it sends one chip to each neighbour and annihilates one chip. Initialize a game by firing all possible vertices until no supercriticals remain. Then drop chips one-by-one on randomly selected vertices, at each step firing any supercritical ones. Perhaps surprisingly, this seemingly haphazard process admits analysis. And besides having diverse applications (e.g., in modelling avalanches, earthquakes, traffic jams, and brain activity), chip-firing reaches into numerous mathematical crevices. The latter include, alphabetically, algebraic combinatorics, discrepancy theory, enumeration, graph theory, stochastic processes, and the list could go on (to zonotopes). I'll share some joint work—with Dave Perkins—that touches on a few items from this list. The talk'll be accessible to non-specialists. Promise!
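The process described above (a vertex fires when its chip count exceeds its valency, sending one chip to each neighbour and annihilating one) is straightforward to simulate; this minimal sketch is my own, not the speaker's:

```python
import random

def stabilize(adj, chips):
    # Fire supercritical vertices (chips[v] > degree of v) until none remain.
    # Each firing sends one chip to each neighbour and annihilates one chip,
    # so the total number of chips strictly decreases and the loop terminates.
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            if chips[v] > len(nbrs):
                chips[v] -= len(nbrs) + 1
                for u in nbrs:
                    chips[u] += 1
                changed = True
    return chips

# drop chips one-by-one on random vertices of a triangle, stabilizing each time
random.seed(1)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
chips = {v: 0 for v in adj}
for _ in range(20):
    chips[random.choice(list(adj))] += 1
    stabilize(adj, chips)
print(chips)  # a stable configuration: every vertex holds at most its degree
```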
B. Gordon (1961) defined sequenceable groups and G. Ringel (1974) defined R-sequenceable groups. Friedlander, Gordon and Miller conjectured that finite abelian groups are either sequenceable or R-sequenceable. The preceding definitions are special cases of what T. Kalinowski and I are calling an orthogonalizeable group, namely, a group for which every Cayley digraph on the group admits either an orthogonal directed path or an orthogonal directed cycle. I shall go over the history and current status of this topic along with a discussion about the completion of a proof of the FGM conjecture.
Mapping class groups are groups which arise naturally from homeomorphisms of surfaces. They are ubiquitous: from hyperbolic geometry, to combinatorial group theory, to algebraic geometry, to low dimensional topology, to dynamics. Even to this colloquium!
In this talk, I will give a survey of some of the highlights from this beautiful world, focusing on how mapping class groups interact with covering spaces of surfaces. In particular, we will see how a particular order 2 element (the hyperelliptic involution) and its centraliser (the hyperelliptic mapping class group) play an important role, both within the world of mapping class groups and in other areas of mathematics. If time permits, I will briefly touch on some recent joint work with Rebecca Winarski that generalises the hyperelliptic story.
No experience with mapping class groups will be assumed, and this talk will be aimed at a general mathematics audience.
The discrepancy of a graph measures how evenly its edges are distributed. I will talk about a lower bound which was proved by Bollobas and Scott in 2006, and extends older results by Erdos, Goldberg, Pach and Spencer. The proof provides a nice illustration of the probabilistic method in combinatorics. If time allows I will outline how this stuff can be used to prove something about convex hulls of bilinear functions.
In this talk, I will outline my interest in, and results towards, the Erdős Discrepancy Problem (EDP). I came across this problem as a PhD student sometime around 2007. At the time, many of the best number theorists in the world thought that this problem would outlast the Riemann hypothesis. I had run into some interesting examples of structured sequences with very small growth, and in some of my early talks, I outlined a way one might be able to attack the EDP. As it turns out, the solution reflected quite a bit of what I had guessed. And I say 'guessed' because I was so young and naïve that my guess was nowhere near informed enough to actually have the experience behind it to call it a conjecture. In this talk, I will go into what I was thinking and provide proof sketches of what turn out to be the extremal examples of EDP.
How confident are you in your choice? Such a simple but important question for people to answer. Yet, capturing how people answer this question has proven challenging for mathematical models of cognition. Part of the challenge is that these models assume confidence is a static variable based on the same information used to make a decision. In the first part of my talk, I will review my dynamic theory of confidence, two-stage dynamic signal detection theory (2DSD). 2DSD is based on the premise that the same evidence accumulation process that underlies choice is used to make confidence judgments, but that post-decisional processing of information contributes to confidence judgments. Thus, 2DSD correctly predicts that the resolution of confidence judgments, or their ability to discriminate between correct and incorrect choices, increases over time. However, I have also found that the dynamics of confidence are driven by other factors, including the very act of making a choice. In the second part of the talk, I will show how 2DSD and other models derived from classical stochastic theories are unable to parsimoniously account for this stable interference effect of choice. In contrast, quantum random walk models of evidence accumulation account for this property by treating judgments and decisions as a measurement process by which a definite state is created from an indefinite state. In summary, I hope to show how better understanding the dynamic nature of confidence can not only provide new methods for improving the accuracy of people's confidence, but also reveal new properties of the deliberation process, including perhaps the quantum nature of evidence accumulation.
I'll continue to discuss Frankl's union-closed sets conjecture. In particular I'll present two possible approaches (local configurations and averaging) and indicate obstacles to proving the general case using these methods.
We report upon insights gained into the BMath through "Conversations: BMath Experiences", a project initiated by the BMath Convener in collaboration with NUMERIC. We invited first-year BMath students to semi-structured conversations around their experiences in their degree, and we will be sharing the general insights into the BMath that the project has given us.
Speakers: Mike Meylan, Andrew Kepert, Liz Stojanovski and Judy-anne Osborn.
I’m going to give a summary of the research projects I have been involved in over my study leave; they share a common theme: retailing. The projects I’m going to talk about are:
Start labelling the vertices of the square grid with 0's and 1's with the condition that any pair of neighbouring vertices cannot both be labelled 1. If one considers the 1's to be the centres of small squares (rotated 45 degrees) then one has a picture of square-particles that cannot overlap.
This problem of "hard squares" appears in different areas of mathematics - for example it has appeared separately as a lattice gas in statistical mechanics, as independent sets in combinatorics and as the golden-mean shift in symbolic dynamics. A core question in this model is to quantify the number of legal configurations - the entropy. In this talk I will discuss what is known about the entropy and describe our recent work finding rigorous and precise bounds for hard squares and related problems.
This is joint work with Yao-ban Chan.
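For readers who want to experiment, the configuration counts behind the entropy can be computed for small grids with a standard transfer-matrix sweep (my own sketch, not the rigorous machinery of the talk); log(count)/(rows·cols) then approximates the entropy per site:

```python
from itertools import product

def hard_square_count(rows, cols):
    # Count 0/1 labellings of a rows x cols grid with no two adjacent 1's,
    # sweeping row by row over the legal single-row states.
    states = [s for s in product((0, 1), repeat=cols)
              if all(not (a and b) for a, b in zip(s, s[1:]))]
    counts = {s: 1 for s in states}
    for _ in range(rows - 1):
        new = {}
        for t in states:
            # t may follow s only if no column holds a 1 in both rows
            new[t] = sum(c for s, c in counts.items()
                         if all(not (a and b) for a, b in zip(s, t)))
        counts = new
    return sum(counts.values())

print(hard_square_count(2, 2))  # 7
print(hard_square_count(3, 3))  # 63
```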
Peter Frankl's union-closed sets conjecture, which dates back to (at least) 1979, states that for every finite family of sets which is closed under taking unions there is an element contained in at least half of the sets. Despite considerable efforts the general conjecture is still open, and the latest polymath project is an attempt to make progress. I will give an overview of equivalent variants of the conjecture and discuss known special cases and partial results.
Model sets, which go back to Yves Meyer (1972), are a versatile class of structures with amazing harmonic properties. They are particularly relevant for mathematical quasicrystals. More recently, also systems such as the square-free integers or the visible lattice points have been studied in this context, leading to the theory of weak model sets. This talk will review some of the development, and introduce some of the concepts in the field.
We will review the (now classical) scheme of basic ($q$-) hypergeometric orthogonal polynomials. It contains more than twenty families; for each family there exists at least one positive weight with respect to which the polynomials are orthogonal provided the parameter $q$ is real and lies between 0 and 1. In the talk we will describe how to reduce the scheme allowing the parameters in the families to be complex. The construction leads to new orthogonality properties or to generalization of known ones to the complex plane.
The Degree/Diameter Problem for graphs, first mentioned in 1964, has its motivation in the efficient design of interconnection networks. It seeks the maximum possible order of a graph with a given (maximum) degree and diameter. It is known that graphs attaining the maximum possible value (the Moore bound) are extremely rare, but much activity is focussed on finding new examples of graphs or families of graphs with orders approaching the bound as closely as possible. Many great mathematicians have studied this problem and obtained results, but many questions about this subject remain unsolved. Our late colleague Professor Mirka Miller greatly expanded this area, and many new results were given by her and her students. One of the problems she was recently interested in was the Degree/Diameter Problem for mixed graphs, i.e. graphs in which we allow both undirected edges and arcs (directed edges).
Some new results about the Moore bound for mixed graphs were obtained in 2015. This talk presents the main known results about these graphs.
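For reference, the Moore bound itself is elementary to compute; this sketch gives the standard undirected version (mixed graphs, the talk's subject, have their own bound not shown here):

```python
def moore_bound(d, k):
    # Maximum conceivable order of a graph with maximum degree d and
    # diameter k: 1 + d + d(d-1) + ... + d(d-1)^(k-1), counting the
    # vertices of a breadth-first tree with no repeated vertices.
    return 1 + d * sum((d - 1) ** i for i in range(k))

print(moore_bound(3, 2))   # 10, attained by the Petersen graph
print(moore_bound(7, 2))   # 50, attained by the Hoffman-Singleton graph
print(moore_bound(57, 2))  # 3250, attainment still an open problem
```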
In this presentation we address the issues and challenges for the future of education and how Maplesoft is committed to offering tools such as Möbius™ to handle these challenges. Möbius is a comprehensive online courseware environment that focuses on science, technology, engineering, and mathematics (STEM). It is built on the notion that people learn by doing. With Möbius, your students can explore important concepts using engaging, interactive applications, visualize problems and solutions, and test their understanding by answering questions that are graded instantly. Throughout the entire lesson, students remain actively engaged with the material and receive constant feedback that solidifies their understanding.
When you use Möbius to develop and deliver your online offerings, you remain in full control of your content and the learning experience.
For more information on Möbius, please visit http://maplesoft.com/products/Mobius/.
An order picking system in a distribution center (DC) owned by Pep Stores Ltd. (PEP), the largest single-brand retailer in South Africa, is investigated. Twelve independent unidirectional picking lines situated in the center of the DC are used to process all piece picking. Each picking line consists of a number of locations situated in a cyclical formation around a central conveyor belt and is serviced by multiple pickers walking in a clockwise direction.
On a daily planning level, three sequential decision tiers exist and are described as follows:
These sub-problems are too complex to solve together and are addressed independently and in reverse sequence using mathematical programming and heuristic techniques. It is shown that the total walking distances of pickers may be significantly reduced when solving sub-problems 1 and 3 and that there is no significant impact when solving sub-problem 2. Moreover, by introducing additional work balance and small carton minimisation objectives into sub-problem 1 better trade-offs between objectives are achieved when compared to the current practice.
A Diophantine m-tuple is a set of m positive integers {a_1, . . . , a_m} such that the product of any two of them plus 1 is a square. For example, {1, 3, 8, 120} is a Diophantine quadruple found by Fermat. It is known that there are infinitely many such examples with m = 4 and none with m = 6. No example is known with m = 5, but if any exist, there are only finitely many. In my talk, I will survey what is known about this problem, as well as its variations, where one replaces the ring of integers by the ring of integers in some finite extension of Q, or by the field of rational numbers, or one looks at a variant of this problem in the ring of polynomials with coefficients in a field of characteristic zero, or when one replaces the squares by perfect powers of a larger exponent, or by members of some other interesting sequence like the sequence of Fibonacci numbers, and so on.
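Checking the defining property is a one-liner; here is a short Python sketch (mine, purely illustrative) verifying Fermat's quadruple:

```python
from math import isqrt

def is_diophantine_tuple(t):
    # Check that a*b + 1 is a perfect square for every pair a, b in t.
    for i, a in enumerate(t):
        for b in t[i + 1:]:
            s = a * b + 1
            if isqrt(s) ** 2 != s:
                return False
    return True

print(is_diophantine_tuple([1, 3, 8, 120]))  # True: e.g. 8*120 + 1 = 31^2
print(is_diophantine_tuple([1, 3, 8, 121]))  # False: 1*121 + 1 = 122
```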
This talk is devoted to three basic forms of the inverse function theorem. The classical inverse function theorem identifies a smooth single-valued localization of the inverse under the condition of nonsingularity of the Jacobian.
I will explain what groups are and give some examples and applications.
At the 1987 Ramanujan Centenary meeting Dyson asked for a coherent group-theoretical structure for Ramanujan's mock theta functions analogous to Hecke's theory of modular forms. We extend the work of Bringmann and Ono, and Ahlgren and Treneer on answering this question.
Firstly, from [1] we consider a mixed formulation for an elliptic obstacle problem for a 2nd order operator and present an hp-FE interior penalty discontinuous Galerkin (IPDG) method. The primal variable is approximated by a linear combination of Gauss-Lobatto-Lagrange (GLL) basis functions, whereas the discrete Lagrangian multiplier is a linear combination of biorthogonal basis functions. A residual based a posteriori error estimate is derived. For its construction the approximation error is split into a discretization error of a linear variational equality problem and additional consistency and obstacle condition terms.
Secondly, an hp-adaptive $C^0$-interior penalty method for the bi-Laplace obstacle problem is presented from [2]. Again we take a mixed formulation using GLL-basis functions for the primal variable and biorthogonal basis functions for the Lagrangian multiplier and present also a residual a posteriori error estimate. For both cases (2nd and 4th order obstacle problems) our numerical experiments clearly demonstrate the superior convergence of the hp-adaptive schemes compared with uniform and h-adaptive schemes.
References
[1] L. Banz, E.P. Stephan, A posteriori error estimates of hp-adaptive IPDG-FEM for elliptic obstacle problems, Applied Numerical Mathematics 76 (2014), 76–92.
[2] L. Banz, B.P. Lamichhane, E.P. Stephan, An hp-adaptive $C^0$-interior penalty method for the obstacle problem of clamped Kirchhoff plates, preprint (2015).
(Joint work with Lothar Banz, University Salzburg, Austria)
We'll answer the question "What's a wavelet?" and discuss continuous wavelet transforms on the line and connections with representation theory and singular integrals. The focus will then turn to discretization techniques, including multiresolution analysis. Matrix completion problems arising from higher-dimensional wavelet constructions will also be described.
I am going to discuss a construction of functional calculus $$f\mapsto f(A,B),$$ where $A$ and $B$ are noncommuting self-adjoint operators. I am going to discuss the problem of estimating the norms $\|f(A_1,B_1)-f(A_2,B_2)\|$, where the pair $(A_2,B_2)$ is a perturbation of the pair $(A_1,B_1)$.
The use of GPUs for scientific computation has undergone phenomenal growth over the past decade, as hardware originally designed with limited instruction sets for image generation and processing has become fully programmable and massively parallel. This talk discusses the classes of problem that can be attacked with such tools, as well as some practical aspects of implementation. A direction for future research by the speaker is also discussed.
We consider identities satisfied by discrete analogues of Mehta-like integrals. The integrals are related to Selberg’s integral and the Macdonald conjectures. Our discrete analogues have the form
$$S_{\alpha,\beta,\delta} (r,n) := \sum_{k_1,...,k_r\in\mathbb{Z}} \prod_{1\leq i < j\leq r} |k_i^\alpha - k_j^\alpha|^\beta \prod_{j=1}^r |k_j|^\delta \binom{2n}{n+k_j},$$ where $\alpha,\beta,\delta,r,n$ are non-negative integers subject to certain restrictions.
In the cases that we consider, it is possible to express $S_{\alpha,\beta,\delta} (r,n)$ as a product of Gamma functions and simple functions such as powers of two. For example, if $1 \leq r \leq n$, then $$S_{2,2,3} (r,n) = \prod_{j=1}^r \frac{(2n)!j!^2}{(n-j)!^2}.$$
The emphasis of the talk will be on how such identities can be obtained, with a high degree of certainty, using numerical computation. In other cases the existence of such identities can be ruled out, again with a high degree of certainty. We shall not give any proofs in detail, but will outline the ideas behind some of our proofs. These involve $q$-series identities and arguments based on non-intersecting lattice paths.
This is joint work with Christian Krattenthaler and Ole Warnaar.
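Since the binomial coefficient vanishes for $|k_j| > n$, the sum is finite, and the identity quoted above can be checked directly for small parameters (a brute-force sketch of my own, not the authors' methods):

```python
from itertools import product
from math import comb, factorial, prod

def S(alpha, beta, delta, r, n):
    # direct evaluation of the finite sum defining S_{alpha,beta,delta}(r, n)
    total = 0
    for k in product(range(-n, n + 1), repeat=r):
        term = 1
        for j in range(r):
            term *= abs(k[j]) ** delta * comb(2 * n, n + k[j])
        for i in range(r):
            for j in range(i + 1, r):
                term *= abs(k[i] ** alpha - k[j] ** alpha) ** beta
        total += term
    return total

def rhs(r, n):
    # the claimed closed form for S_{2,2,3}(r, n), valid for 1 <= r <= n
    return prod(factorial(2 * n) * factorial(j) ** 2 // factorial(n - j) ** 2
                for j in range(1, r + 1))

print(S(2, 2, 3, 2, 2), rhs(2, 2))  # 2304 2304
```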
We consider the stability of a class of abstract positive systems originating from the recurrence analysis of stochastic systems, such as multiclass queueing networks and semimartingale reflected Brownian motions. We outline that this class of systems can also be described by differential inclusions in a natural way. We will point out that because of the positivity of the systems the set-valued map defining the differential inclusion is not upper semicontinuous in general and, thus, well-known characterizations of asymptotic stability in terms of the existence of a (smooth) Lyapunov function cannot be applied to this class of positive systems. Following an abstract approach, based on common properties of the positive systems under consideration, we show that asymptotic stability is equivalent to the existence of a Lyapunov function. Moreover, we examine the existence of smooth Lyapunov functions. Putting an assumption on the trajectories of the positive systems which demands, for any trajectory, the existence of a neighboring trajectory such that their difference grows linearly in time and in the distance of the starting points, we prove the existence of a $C^\infty$-smooth Lyapunov function. Looking at this hypothesis from the differential inclusions perspective, it turns out that differential inclusions defined by Lipschitz continuous set-valued maps taking nonempty, compact and convex values have this property.
We will be answering the following question raised by Christopher Bishop:
'Suppose we stand in a forest with tree trunks of radius $r > 0$ and no two trees centered closer than unit distance apart. Can the trees be arranged so that we can never see further than some distance $V < \infty$, no matter where we stand and what direction we look in? What is the size of $V$ in terms of $r$?'
The methods used to study this problem involve Fourier analysis and sharp estimates of exponential sums.
A dimension adaptive algorithm for sparse grid quadrature in reproducing kernel Hilbert spaces on products of spheres uses a greedy algorithm to approximately solve a down-set constrained binary knapsack problem. The talk will describe the quadrature problem, the knapsack problem and the algorithm, and will include some numerical examples.
We discuss problems of approximation of an irrational number by rationals whose numerators and denominators lie in prescribed arithmetic progressions. Results are given both from a metrical and a non-metrical point of view, and both from an asymptotic and a uniform point of view. The principal novelty of this theory is a Khintchine-type theorem for uniform approximation in this setup. Time permitting, some applications of this work will be discussed.
I will talk a bit about the benefits of a regular outlook.
We study the family of self-inversive polynomials of degree $n$ whose $j$th coefficient is $\gcd(n,j)^k$, for a fixed integer $k \geq 1$. We prove that these polynomials have all of their roots on the unit circle, with uniform angular distribution. In the process we prove some new results on Jordan's totient function. We also show that these polynomials are irreducible, apart from an obvious linear factor, whenever $n$ is a power of a prime, and conjecture that this holds for all $n$. Finally we use some of these methods to obtain general results on the zero distribution of self-inversive polynomials and of their "duals" obtained from the discrete Fourier transforms of the coefficients sequence. (Joint work with Sinai Robins).
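As a numerical sanity check for one case (my own sketch; the sign-change trick works because for a palindromic polynomial of even degree $n$, $e^{-in\theta/2}P(e^{i\theta})$ is real on the unit circle):

```python
from cmath import exp
from math import gcd, pi

def coeffs(n, k):
    # j-th coefficient gcd(n, j)^k; note gcd(n, 0) = gcd(n, n) = n,
    # so the coefficient sequence is palindromic (self-inversive)
    return [gcd(n, j) ** k for j in range(n + 1)]

def circle_zero_count(c, samples=4000):
    # count sign changes of the real function e^{-i n t/2} P(e^{it})
    # over [0, 2*pi); each simple zero on |z| = 1 gives one sign change
    n = len(c) - 1
    vals = []
    for m in range(samples):
        t = 2 * pi * m / samples
        z = exp(1j * t)
        p = sum(cj * z ** j for j, cj in enumerate(c))
        vals.append((exp(-1j * n * t / 2) * p).real)
    return sum(1 for a, b in zip(vals, vals[1:] + vals[:1]) if a * b < 0)

c = coeffs(8, 1)
print(c == c[::-1], circle_zero_count(c))  # True 8: all roots on the circle
```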
A motion which is periodic may be considered symmetric under a transformation in time. A measure of the phase relationship these motions have with respect to a geometric figure which is symmetric under some transformation in space is presented. The implications this has for the discretised patterns generated are discussed. The talk focuses on theoretical formalisms, such as those which display the fractal patterns of 'strange attractors', rather than group theory for symmetric transformations.
We consider monotone systems defined by ODEs on the positive orthant in $\mathbb{R}^n$. These systems appear in various areas of application, and we will discuss in some level of detail one of these applications related to large-scale systems stability analysis.
Lyapunov functions are frequently used in the stability analysis of dynamical systems. For monotone systems, so-called sum- and max-separable Lyapunov functions have proven very successful. One can be written as a sum, the other as a maximum, of functions of scalar arguments.
We will discuss several constructive existence results for both types of Lyapunov function. To some degree, these functions can be associated with left- and right eigenvectors of an appropriate mapping. However, and perhaps surprisingly, examples will demonstrate that stable systems may admit only one or even neither type of separable Lyapunov function.
In scanning ptychography, an unknown specimen is illuminated by a localised illumination function resulting in an exit-wave whose intensity is observed in the far-field. A ptychography dataset is a series of these observations, each of which is obtained by shifting the illumination function to a different position relative to the specimen with neighbouring illumination regions overlapping. Given a ptychographic data set, the blind ptychography problem is to simultaneously reconstruct the specimen, illumination function, and relative phase of the exit-wave. In this talk I will discuss an optimisation framework which reveals current state-of-the-art reconstruction methods in ptychography as (non-convex) alternating minimization-type algorithms. Within this framework, we provide a proof of global convergence to critical points using the Kurdyka-Łojasiewicz property.
We use random walks to experimentally compute the first few terms of the cogrowth series for a finitely presented group. We propose candidates for the amenable radical of any non-amenable group, and a Følner sequence for any amenable group, based on convergence properties of random walks.
The Hardy and Paley-Wiener Spaces are defined due to important structural theorems relating the support of a function's Fourier transform to the growth rate of the analytic extension of a function. In this talk we show that analogues of these spaces exist for Clifford-valued functions in n dimensions, using the Clifford-Fourier Transform of Brackx et al and the monogenic ($n+1$ dimensional) extension of these functions.
Given a finite presentation of a group, proving properties of the group can be difficult. Indeed, many questions about finitely presented groups are unsolvable in general. Algorithms exist for answering some questions while for other questions algorithms exist for verifying the truth of positive answers. An important tool in this regard is the Todd-Coxeter coset enumeration procedure. It is possible to extract formal proofs from the internal working of coset enumerations. We give examples of how this works, and show how the proofs produced can be mechanically verified and how they can be converted to alternative forms. We discuss these automatically produced proofs in terms of their size and the insights they offer. We compare them to hand proofs and to the simplest possible proofs. We point out that this technique has been used to help solve a longstanding conjecture about an infinite class of finitely presented groups.
We survey the literature on orthogonal polynomials in several variables, starting from Hermite's work in the late 19th century and continuing to the works of Zernike (1920s) and Itô (1950s). We explore combinatorial and analytic properties of the Itô polynomials and offer a general class in 2 dimensions which has interesting structural properties. Connections with certain PDEs will be mentioned.
We propose new path-following predictor-corrector algorithms for solving convex optimization problems in conic form. The main structural properties used in our design and analysis of the algorithms hinge on some key properties of a special class of very smooth, strictly convex barrier functions. Even though our analysis has primal and dual components, our algorithms work with the dual iterates only, in the dual space. Our algorithms converge globally at the same worst-case rate as the current best polynomial-time interior-point methods; in addition, they enjoy local superlinear convergence under some mild assumptions. The algorithms are based on an easily computable gradient proximity measure, which ensures an automatic transition from the global linear rate of convergence to the local superlinear one. Our step-size procedure for the predictor step is related to the maximum step size (the one that takes us to the boundary).
This talk is based on joint work with Yu. Nesterov.
Lift-and-Project operators (which map compact convex sets to compact convex sets in a certain contractive way, via higher dimensional convex representations of these sets) provide an automatic way for constructing all facets of the convex hull of 0,1 vectors in a polytope given by linear or polynomial inequalities. They also yield tractable approximations provided that the input polytope is tractable and that we only apply the operators O(1) times. There are many generalizations of the theory of these operators which can be used, in theory, to generate (eventually, in the limit) arbitrarily tight, convex relaxations of essentially arbitrary nonconvex sets. Moreover, Lift-and-Project methods provide universal ways of applying Semidefinite Programming techniques to Combinatorial Optimization problems, and in general, to nonconvex optimization problems.
I will survey some of the developments (some recent, some not so recent) that I have been involved in, especially those utilizing Lift-and-Project methods and Semidefinite Optimization. I will touch upon the connections to Convex Algebraic Geometry and present various open problems.
In this talk we will begin with a brief history of the mathematics of aperiodic tilings of Euclidean space, highlighting their relevance to the theory of quasicrystals. Next we will focus on an important collection of point sets, cut and project sets, which come from a dynamical construction and provide us with a mathematical model for quasicrystals. After giving definitions and examples of these sets, we will discuss their relationship with Diophantine approximation, and show how the interplay between these two subjects has recently led to new results in both of them.
I will complete the proof of the Kemnitz conjecture and make some remarks about related zero-sum problems.
Supervisors: Mirka Miller, Joe Ryan and Andrea Semanicova-Fenovcikova
We give some background on labelling schemes such as graceful, harmonious, magic, antimagic and irregular total labellings. We then describe why the study of graph labelling is important by outlining some of its applications. Next we briefly describe our methodology, including Roberts' construction for obtaining completely separating systems (CSS), which helps us determine antimagic labellings of graphs, and Alon's Combinatorial Nullstellensatz. We illustrate one of the many applications of graph labelling with an example. Finally, we introduce reflexive irregular total labelling and explain its importance. To conclude, we present the research plan and timeline for the candidature.
After briefly describing a few more simple applications of Alon's Nullstellensatz, I will present in detail Reiher's amazing proof of the Kemnitz conjecture regarding lattice points in the plane.
Noga Alon's Combinatorial Nullstellensatz, published in 1999, is a statement about polynomials in many variables and what happens if one of them vanishes on the set of common zeros of some others. In contrast to Hilbert's Nullstellensatz, it makes strong assumptions about the polynomials involved, and this leads to a tool for producing short and elegant proofs of numerous old and new results in combinatorial number theory and graph theory. I will present the proof of the algebraic result and some of the combinatorial applications from the 1999 paper.
Stability analysis plays a central role in nonlinear control and systems theory; stability is, in fact, the fundamental requirement for all practical control systems. In this research, advanced stability analysis techniques are reviewed and developed for discrete-time dynamical systems. In particular, we study the relationships between input-to-state stability related properties and $\ell_2$-type stability properties. These considerations naturally lead to the study of input-output models and, further, to questions of incremental stability and convergent dynamics. Future work will also outline several application scenarios for our theory, including observer analysis and secure communication.
Supervisors: A/Prof. Christopher Kellett and Dr. Björn Rüffer
The existence of perfect matchings in regular graphs is a fundamental problem in graph theory, and it closely models many real-world problems such as broadcasting and network management. Recently, we have studied the number of edge-disjoint perfect matchings in regular graphs, and using some well-known results on the existence of perfect matchings and on operations forcing unique perfect matchings in regular graphs, we have been able to make some pleasant progress. In this talk, we will present the new results and briefly discuss the proofs.
In either the inviscid limit of the Euler equations, or the viscously dominated limit of the Stokes equations, the determination of fluid flows can be reduced to solving singular integral equations on immersed structures and bounding surfaces. Further dimensional reduction is achieved using asymptotics when these structures are sheets or slender fibers. These reductions in dimension, and the convolutional second-kind structure of the integral equations, allows for very efficient and accurate simulations of complex fluid-structure interaction problems using solvers based on the Fast Multipole or related methods. These representations also give a natural setting for developing implicit time-stepping methods for the stiff dynamics of elastic structures moving in fluids. I'll discuss these integral formulations, their numerical treatment, and application to simulating structures moving in high-speed flows (flapping flags and flyers), and for resolving the complex interactions of many, possibly flexible, bodies moving in microscopic biological flows.
Partitioning is a fundamental technique in graph theory, and graph partitioning techniques are widely used to solve combinatorial problems. We will discuss the role of edge partitioning techniques in graph embedding. Graph embedding includes combinatorial problems such as the bandwidth, wirelength and forwarding index problems, as well as cheminformatics problems such as the Wiener, Szeged and PI indices. In this seminar, we study the convex partition and its characterization. In addition, we analyze the relationship between the convex partition and other edge partitions such as the Szeged and channel edge partitions. The graphs that induce convex partitions are bipartite; we will discuss the difficulties in extending this technique to non-bipartite graphs.
This completion talk is in two parts. In the first part, I will present a characterisation of the cyclic Douglas-Rachford method's behaviour, generalising a result which was presented in my confirmation seminar. In the second part, I will explore non-convex regularity notions in an application arising in biochemistry.
Amenability is of interest for many reasons, not least of which is its paradoxical decomposition into so many various characterisations, each equal to the whole. Two of these are the characterisation in terms of the cogrowth rate, and the existence of a Følner sequence. In exploring a known method of computing the cogrowth rate using a random walk, and by analyzing which groups seem to be pathological for this algorithm, we discover new connections between these properties.
Mathematicians sometimes speak of the beauty of mathematics, which to us is reflected for the most part in proofs and solutions. I am going to give a few proofs that I find very nice. These are results that postgraduate students in discrete mathematics should certainly know exist.
We study maximal monotone inclusions from the perspective of (convex) gap functions.
We propose a very natural gap function and will demonstrate how this function arises from the Fitzpatrick function — a convex function used effectively to represent maximal monotone operators.
This approach allows us to use the powerful strong Fitzpatrick inequality to analyse solutions of the inclusion.
This is joint work with Joydeep Dutta.
Functions that are piecewise defined are a common sight in mathematics, while convexity is a property especially desired in optimization. Suppose now a piecewise-defined function is convex on each of its defining components – when can we conclude that the entire function is convex? Our main result provides sufficient conditions for a piecewise-defined function $f$ to be convex. We also provide a sufficient condition for checking the convexity of a piecewise linear-quadratic function, which plays an important role in computer-aided convex analysis.
Based on joint work with Heinz H. Bauschke (Mathematics, UBC Okanagan) and Hung M. Phan (Mathematics, University of Massachusetts Lowell).
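A hedged numerical companion to the abstract above (the function and the midpoint test are my own toy example, not the paper's conditions): each piece of the function below is convex, yet the glued function is not, because the slope drops at the breakpoint.

```python
# Each piece below is convex, but the glued function is not: the derivative
# jumps downward (from 0 to -1) at x = 0.  A simple midpoint-convexity test
# on sample points detects the failure.

import itertools

def f(x):
    return x * x if x <= 0 else x * x - x   # both pieces convex on their own

def midpoint_convex_on_samples(g, points):
    """Check g(m) <= (g(a)+g(b))/2 for all sample pairs; False => not convex."""
    for a, b in itertools.combinations(points, 2):
        m = 0.5 * (a + b)
        if g(m) > 0.5 * (g(a) + g(b)) + 1e-12:
            return False
    return True

pts = [k / 10 for k in range(-20, 21)]
print("piece x^2 convex on samples:    ", midpoint_convex_on_samples(lambda x: x * x, pts))
print("glued piecewise function convex?", midpoint_convex_on_samples(f, pts))
```

This only certifies non-convexity numerically, of course; the point of the paper is to give verifiable sufficient conditions that work symbolically.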
I'll give an overview of some recent developments in the theory of groups of automorphisms of trees which are discrete in the full automorphism group of the tree and are locally-transitive. I'll also mention some questions which have been provoked by this work.
We generalize the Burger-Mozes universal groups acting on regular trees by prescribing the local action on balls of a given radius, and study the basic properties of this construction. We then apply our results to prove a weak version of the Goldschmidt-Sims conjecture for certain classes of primitive permutation groups.
In recent years, there has been quite a bit of interest in generalized Fourier transforms in Clifford analysis and in particular for the so-called Clifford-Fourier transform.
In the first part of the talk I will provide some motivation for the study of this transform. In the second part we will develop a new technique to find a closed formula for its integral kernel, based on the familiar Laplace transform. As a bonus, this yields a compact and elegant formula for the generating function of all even dimensional kernels.
Managing railways in general, and high-speed rail in particular, is a very complex task involving many interrelated decisions at the strategic, tactical and operational levels. In this research, two mixed integer linear programming models are presented which are, to our knowledge, the first of their kind in the literature. In the first model, a single line with two different train types is considered. In the second model, cyclic train timetabling and platform assignment problems are considered and solved to optimality; for this model, methods for obtaining bounds on the first objective function are presented, and some pre-processing techniques to reduce the number of decision variables and constraints are also proposed. The proposed models' objectives are to minimize (1) the cycle length, called the interval, and (2) the total journey time of all trains dispatched from their origin in each cycle. Here we explicitly consider the minimization of the cycle length using linear constraints and a linear objective function. The proposed models are different from, and faster than, the widely used Periodic Event Scheduling Problem (PESP).
This will be an informal talk from our UoN Engineering colleague Prof Bill McBride who recently visited some "Mid-West" Universities in the USA. Prof McBride will discuss what he saw and learnt, with reference to first year maths teaching for Engineering students.
Advantages of EEG in studying brain signals include excellent temporal localization and, potentially, good spatial localization, given good models for source localization in the brain. Phase synchrony and cross-frequency coupling are two phenomena believed to indicate cooperation of different brain regions in cognition through messaging via different frequency bands. To verify these hypotheses requires the ability to extract time-frequency localized components from complex multicomponent EEG data. One such method, empirical mode decomposition, has shown increasing promise in engineering applications, and we will review recent progress on this approach. Another potential method uses bases or frames of optimally time-frequency localized signals, so-called prolate spheroidal wave functions. New properties of these functions developed in joint work with Jeff Hogan will be reviewed and potential applications to EEG will be discussed.
Arising originally from the analysis of a family of compressed sensing matrices, Ian Wanless and I recently investigated a number of linear algebra problems involving complex Hadamard matrices. I will discuss our main result, which relates rank-one submatrices of Hadamard matrices to the number of non-zero terms in a representation of a fixed vector with respect to two unbiased bases of a finite dimensional vector space. Only a basic knowledge of linear algebra will be assumed.
Computers are changing the way we do mathematics, as well as introducing new research agendas. Computational methods in mathematics, including symbolic and numerical computation and simulation, are by now familiar. These lectures will explore the way that "formal methods," based on formal languages and logic, can contribute to mathematics as well.
In the 19th century, George Boole argued that if we take mathematics to be the science of calculation, then symbolic logic should be viewed as a branch of mathematics: just as number theory and analysis provide means to calculate with numbers, logic provides means to calculate with propositions. Computers are, indeed, good at calculating with propositions, and there are at least two ways that this can be mathematically useful: first, in the discovery of new proofs, and, second, in verifying the correctness of existing ones.
The first goal generally falls under the ambit of "automated theorem proving" and the second falls under the ambit of "interactive theorem proving." There is no sharp distinction between these two fields, however, and the line between them is becoming increasingly blurry. In these lectures, I will provide an overview of both fields and the interactions between them, and speculate as to the roles they can play in mainstream mathematics.
I will aim to make the lectures accessible to a broad audience. The first lecture will provide a self-contained overview. The remaining lectures are for the most part independent of one another, and will not rely on the first lecture.
In this colloquium-style presentation I will describe these combinatorial objects and how they relate to each other. Time permitting, I will also show how they can be used in other areas of Mathematics. Joint work with Sooran Kang and Samuel Webster.
Starting with a substitution tiling, such as the Penrose tiling, we demonstrate a method for constructing infinitely many new substitution tilings. Each of these new tilings is derived from a graph iterated function system and the tiles typically have fractal boundary. As an application of fractal tilings, we construct an odd spectral triple on a C*-algebra associated with an aperiodic substitution tiling. Even though spectral triples on substitution tilings have been extremely well studied in the last 25 years, our construction produces the first truly noncommutative spectral triple associated with a tiling. My work on fractal substitution tilings is joint with Natalie Frank and Sam Webster, and my work on spectral triples is joint with Michael Mampusti.
The seminar will provide a brief overview of the potential for category theory (CT) to contribute to quantitative analysis in the Social Sciences. This will be followed by a description of CT as a "Rosetta Stone" linking topology, algebra, computation and physics together. This carries over to process thinking and circuit analysis. Coecke and Paquette's approach to diagrammatic analysis is examined to emphasize the efficiency of block-shifting techniques over diagram chasing. Baez and Erbele's application of CT to feedback control is the main focus of analysis, and this is followed by a brief excursion into multicategories (cobordisms), before finishing up with some material on coalgebras and transition systems.
Have you ever tried to add up the numbers 1+1/2+1/3+...? If you've never thought about this before, then give it a go (and don't Google the answer!). In this talk we will settle this relatively easy question and consider how things might change if we try to thin out the sum a bit. For instance, what if we only used the prime numbers 1/2+1/3+1/5+...? Or what about the square numbers 1+1/4+1/9+...? There will be some algebra and integration at times, but if you can add fractions (or use a calculator) then you should be able to follow almost everything.
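For readers who want to experiment before the talk, here is a minimal sketch (mine, not the speaker's) that computes partial sums of the three series mentioned above: the harmonic series grows like $\log N$, the sum of prime reciprocals diverges even more slowly (like $\log \log N$), and the sum of squared reciprocals converges to $\pi^2/6$.

```python
# Partial sums of 1/k, 1/p (p prime) and 1/k^2 up to N, illustrating
# divergence (slow and very slow) versus convergence to pi^2/6.

import math

def sieve(n):
    """Return a list of primes up to n via the sieve of Eratosthenes."""
    is_p = [True] * (n + 1)
    is_p[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if is_p[p]:
            is_p[p*p::p] = [False] * len(is_p[p*p::p])
    return [i for i, flag in enumerate(is_p) if flag]

N = 10**6
harmonic = sum(1.0 / k for k in range(1, N + 1))
prime_sum = sum(1.0 / p for p in sieve(N))
square_sum = sum(1.0 / k**2 for k in range(1, N + 1))

print(f"sum 1/k,   k <= {N}: {harmonic:.4f}  (log N = {math.log(N):.4f})")
print(f"sum 1/p,   p <= {N}: {prime_sum:.4f}  (diverges, very slowly)")
print(f"sum 1/k^2, k <= {N}: {square_sum:.6f} (pi^2/6 = {math.pi**2/6:.6f})")
```
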
In this talk I will present the main results of my PhD thesis (by the same name), which focuses on the application of matrix determinants as a means of producing number-theoretic results.
Motivated by an investigation of properties of the Riemann zeta function, we examine the growth rate of certain determinants of zeta values. We begin with a generalisation of determinants based on the Hurwitz zeta function, where we describe the arithmetic properties of its denominator and establish an asymptotic bound. We later employ a determinant identity to bound the growth of positive Hankel determinants. Noting the positivity of determinants of Dirichlet series allows us to prove specific bounds on determinants of zeta values in particular, and of Dirichlet series in general. Our results are shown to be the best that can be obtained from our method of bounding, and we conjecture a slight improvement could be obtained from an adjustment to our specific approach.
Within the course of this investigation we also consider possible geometric properties which are necessary for the positivity of Hankel determinants, and we examine the role of Hankel determinants in irrationality proofs via their connection with Padé approximation.
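As a quick sanity check of the positivity phenomenon described above (a sketch of my own, using only the fact that $\zeta(n) = \sum_k (1/k)^n$ is the $n$-th moment of the positive measure $\sum_k \delta_{1/k}$, so its Hankel determinants are positive), one can compute small Hankel determinants of zeta values numerically.

```python
# Small Hankel determinants det[zeta(2+i+j)] computed from naive partial
# sums of the zeta function; moment-sequence positivity predicts both are > 0.

def zeta(s, terms=100000):
    """Naive partial-sum approximation of the Riemann zeta function, s >= 2."""
    return sum(n ** -s for n in range(1, terms + 1))

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists, by cofactor expansion."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

z = [zeta(s) for s in range(2, 8)]            # zeta(2), ..., zeta(7)
H2 = z[0] * z[2] - z[1] ** 2                  # det [zeta(2+i+j)], 0 <= i,j <= 1
H3 = det3([[z[0], z[1], z[2]],
           [z[1], z[2], z[3]],
           [z[2], z[3], z[4]]])
print(f"2x2 Hankel determinant: {H2:.6f}")
print(f"3x3 Hankel determinant: {H3:.6e}")
```

Note how small the 3x3 determinant already is; the thesis's growth-rate bounds quantify exactly how fast these determinants shrink.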
Supervisor: Murray Elder
Supervisor: Mike Meylan
Supervisor: Wadim Zudilin
I have recently [2] shown that each group $Z_2^{2m}$ gives rise to a pair of bent functions with disjoint support, whose Cayley graphs are a disjoint pair of strongly regular graphs $\Delta_m[-1]$, $\Delta_m[1]$ on $4^m$ vertices. The two strongly regular graphs are twins in the sense that they have the same parameters $(\nu, k, \lambda, \mu)$. For $m < 4$, the two strongly regular graphs are isomorphic. For $m \geq 4$, they are not isomorphic, because the size of the largest clique differs. In particular, the largest clique size of $\Delta_m[-1]$ is $\rho(2^m)$ and the largest clique in $\Delta_m[1]$ has size at least $2^m$, where $\rho(n)$ is the Hurwitz-Radon function. This non-isomorphism result disproves a number of conjectures that I made in a paper on constructions of Hadamard matrices [1].
[1] Paul Leopardi, "Constructions for Hadamard matrices, Clifford algebras, and their relation to amicability - anti-amicability graphs", Australasian Journal of Combinatorics, Volume 58(2) (2014), pp. 214–248.
[2] Paul Leopardi, "Twin bent functions and Clifford algebras", accepted 13 January 2015 by the Springer Proceedings in Mathematics and Statistics (PROMS): Algebraic Design Theory and Hadamard Matrices (ADTHM 2014).
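A short sketch (my own, assuming only the standard definition of the Hurwitz-Radon function: write $n = 2^{4a+b} \cdot (\text{odd})$ with $0 \le b \le 3$, so that $\rho(n) = 8a + 2^b$) makes it easy to see why the clique-size argument above only separates the twin graphs for $m \geq 4$.

```python
# The Hurwitz-Radon function rho(n): write n = 2^(4a+b) * (odd), 0 <= b <= 3,
# and set rho(n) = 8a + 2^b.  Comparing rho(2^m) with 2^m shows that the two
# clique-size bounds agree for m <= 3 and diverge from m = 4 onwards.

def hurwitz_radon(n):
    """Hurwitz-Radon function rho(n) for a positive integer n."""
    e = 0
    while n % 2 == 0:        # extract the 2-adic valuation e of n
        n //= 2
        e += 1
    a, b = divmod(e, 4)
    return 8 * a + 2 ** b

for m in range(1, 7):
    r = hurwitz_radon(2 ** m)
    marker = "  <-- rho falls behind" if r < 2 ** m else ""
    print(f"m = {m}: rho(2^m) = {r:3d}  vs  2^m = {2**m:3d}{marker}")
```
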
I will explain what an equation in a free group is, why they are interesting, and how to solve them. The talk will be accessible to anyone interested in maths or computer science or logic.
We introduce a subfamily of additive enlargements of a maximally monotone operator $T$. Our definition is inspired by the seminal work of Fitzpatrick presented in 1988. These enlargements are a subfamily of the family of enlargements introduced by Svaiter in 2000. For the case $T = \partial f$, we prove that some members of the subfamily are smaller than the $\varepsilon$-subdifferential enlargement. For this choice of $T$, we can construct a specific enlargement which coincides with the $\varepsilon$-subdifferential. Since these enlargements are all additive, they can be seen as structurally closer to the $\varepsilon$-subdifferential enlargement.
Joint work with Juan Enrique Martínez-Legaz (Universitat Autonoma de Barcelona), Mahboubeh Rezaei (University of Isfahan, Iran), and Michel Théra (University of Limoges).
We will discuss the validity of the mean ergodic theorem along left Følner sequences in a countable amenable group $G$. Although the weak ergodic theorem always holds along any left Følner sequence in $G$, we will provide examples where the mean ergodic theorem fails in quite dramatic ways. On the other hand, if $G$ does not admit any ICC quotients, e.g. if $G$ is virtually nilpotent, then we will prove that the mean ergodic theorem does indeed hold along any left Følner sequence.
Based on joint work with M. Björklund (Chalmers).
Dengue is caused by four different serotypes; individuals infected by one serotype obtain lifelong immunity to that serotype, but not to the others. Individuals with secondary infections may develop the more dangerous form of the disease, dengue haemorrhagic fever (DHF), because of a higher viral load. Because traditional measures are unsustainable, the use of the bacterium Wolbachia has been proposed as an alternative strategy against dengue fever. However, little research has been conducted to study the effectiveness of this intervention in the field, and understanding its effectiveness is important before it is widely implemented in the real world. In this talk, I will discuss the effectiveness of this intervention, present the mathematical models that I have developed to study it, and explain how these models differ from existing ones. I will also present the effects of the presence of multiple strains of dengue on transmission dynamics.
Supervisors: David Allingham, Roslyn Hickson (IBM), Kathryn Glass (ANU), Irene Hudson
I will discuss a combinatorial problem coming from database design. The problem can be interpreted as maximizing the number of edges in a certain hypergraph subject to a recoverability condition. It was solved recently by the high school student Max Aehle, who came up with a nice argument using the polynomial method.
This presentation will explore the specificities of teaching mathematics in engineering studies that transcend the division between technical, scientific and design disciplines and how students of such studies are different from traditional engineering students. Data comes from a study at the Media Technology Department of Aalborg University in Copenhagen, Denmark. Media Technology is an education that combines technology and creativity and looks at the technology behind areas such as advanced computer graphics, games, electronic music, animations, interactive art and entertainment, to name a few. During the span of the education students are given a strong technical foundation, both in theory and in practice.
The presentation emerges from research by my PhD student Evangelia Triantafyllou and myself. The study presented here used performance tests, attitude questionnaires, interviews with students, and observations of mathematics-related courses. It focused on investigating student performance and retention in mathematics, attitudes towards mathematics, and preferences among teaching and learning methods, including a flipped classroom approach using videos produced by course teachers. The outcome of this study can be used to create a profile of a typical student and to tailor approaches for teaching mathematics in this discipline. Moreover, it can be used as a reference point for investigating ways to improve mathematics education in other creative engineering studies.
About the Speaker: Olga Timcenko joined the Medialogy department of Aalborg University in Copenhagen in fall 2006 as an Associate Professor. Before joining the university, she was a Senior Technology Consultant in LEGO Business Development, LEGO Systems A/S, where she worked for different departments of LEGO on research and development of multimedia materials for children, including LEGO Digital Designer and LEGO Mindstorms NXT. She was active in the FIRST LEGO League project (a worldwide robotics competition among school children) and the Computer Clubhouse project. During 2003-2006, she was LEGO's team leader in the EU-financed Network of Excellence in Technology Enhanced Learning called Kaleidoscope, and actively participated in several Kaleidoscope JEIRPs and SIGs. She has a PhD in Robotics from Syddansk University in Odense, Denmark, and is author or co-author of 40+ conference and journal papers in the fields of robotics and of children and technology, as well as 4 international patents in the field of virtual 3D worlds and 3D user interfaces for children. Her last project for LEGO was the redesign of the Mindstorms iconic programming language for children (the product was launched worldwide in August 2006).
This week I shall conclude my discussion of pancyclicity and Cayley graphs on generalized dihedral groups.
This is joint work with our former honours student Alex Muir. We look at the variety of lengths of cycles in Cayley graphs on generalized dihedral groups.
A mixed formulation for a Tresca frictional contact problem in linear elasticity is considered in the context of boundary integral equations, and is later extended to Coulomb friction. The discrete Lagrange multiplier, an approximation of the surface traction on the contact part of the boundary, is a linear combination of biorthogonal basis functions. The biorthogonality allows us to rewrite the variational inequality constraints as a simple set of complementarity problems, enabling an efficient application of a semi-smooth Newton solver to the discrete mixed problems. Typically, the solution of frictional contact problems has reduced regularity at the interfaces between contact and non-contact and between stick and slip. To identify the a priori unknown locations of these interfaces, a posteriori error estimates of residual and hierarchical type are introduced. For a stabilised version of our mixed formulation (with the Poincaré-Steklov operator) we also present a priori estimates for the solution. Numerical results show the applicability of the error estimators and the superiority of hp-adaptivity over low-order uniform and adaptive approaches.
Ernst Stephan is a visitor of Bishnu Lamichhane.
This talk showcases some large numbers and where they came from.
In celebration of both a special "big" pi Day (3/14/15) and the 2015 centennial of the Mathematical Association of America, we review the illustrious history of the constant $\pi$ in the pages of the American Mathematical Monthly.
Consider a function from the circle to itself whose derivative is greater than one at every point. Examples are maps of the form f(x) = mx (mod 1) for integers m > 1. In some sense, these are the only possible examples. This fact, and the corresponding question for maps on higher-dimensional manifolds, was a major motivation for Gromov to develop pioneering results in the field of geometric group theory.
In this talk, I'll give an overview of this and other results relating dynamical systems to the geometry of the manifolds on which they act and (time permitting) talk about my own work in the area.
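As a minimal numerical sketch of the expansion phenomenon (my own example, using the doubling map $f(x) = 2x \bmod 1$ on the circle): nearby orbits separate at an exponential rate until their distance is of order one.

```python
# The doubling map f(x) = 2x mod 1 is expanding: it stretches small circle
# distances by a factor of 2 at every step, so nearby orbits diverge
# exponentially fast.

def doubling(x):
    return (2 * x) % 1.0

x, y = 0.2, 0.2 + 1e-6     # two nearby points on the circle R/Z
for step in range(8):
    # circle distance: the shorter of the two arcs between x and y
    d = min(abs(x - y), 1 - abs(x - y))
    print(f"step {step}: distance = {d:.3e}")
    x, y = doubling(x), doubling(y)
```
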
When attacking various difficult problems in the field of Diophantine approximation the application of certain topological games has proven extremely fruitful in recent times due to the amenable properties of the associated 'winning' sets. Other problems in Diophantine approximation have recently been solved via the method of constructing certain tree-like structures inside the Diophantine set of interest. In this talk I will discuss how one broad method of tree-like construction, namely the class of 'generalised Cantor sets', can be formalized for use in a wide variety of problems. By introducing a further class of so-called 'Cantor-winning' sets we may then provide a criterion for arbitrary sets in a metric space to satisfy the desirable properties usually attributed to winning sets, and so in some sense unify the two above approaches. Applications of this new framework include new answers to questions relating to the mixed Littlewood conjecture and the $\times2, \times3$ problem. The talk will be aimed at a broad audience.
It is well known that there is a one-to-one correspondence between signed plane graphs and link diagrams via the medial construction. This relationship was used in knot tabulation in the early days of knot theory, and it provides a method of studying links using graphs. Let $G$ be a plane graph, and let $D(G)$ be the alternating link diagram corresponding to the (positive) $G$ obtained from the medial construction. A state $S$ of $D$ is a labelling of each crossing of $D$ by either $A$ or $B$. Making the corresponding split at each crossing gives a number of disjoint embedded closed circles, called state circles. We call a state which possesses the maximum number of state circles a maximum state. Maximum states are closely related to the genus of the corresponding link, and have therefore been studied. In this talk, we will discuss some of the recent progress we have made on this topic.
A look into infinity, a few famous problems, and a little bit of normality.
I will lecture on 32 proofs of a theorem of Euler posed by mistake by Goldbach regarding Zeta(3). See http://www.carma.newcastle.edu.au/jon/goldbach-talk10.pdf.
The power domination problem is a variant of the famous domination problem, with applications in the monitoring of electric power networks. In this talk, we give a literature review of the work done so far and of possible open areas of research. We also introduce two interesting variants of power domination: the resolving power domination problem and the propagation problem. We present preliminary work and a research plan for the future.
Supervisors: Prof. Mirka Miller, Dr Joe Ryan, Prof. Paul D Manuel.
Random walks have been used to model stochastic processes in many scientific fields. I will introduce invariant random walks on groups, where the transition probabilities are given by a probability measure. The Poisson boundary will also be discussed. It is a space associated with every group random walk that encapsulates the behaviour of the walks at infinity and gives a description of certain harmonic functions on the group in terms of the essentially bounded functions on the boundary. I will conclude with a discussion of project aims, namely to compute the boundary for certain random walks in new cases and to investigate the order structure of certain ideals in $L^1(G)$ defined for each invariant random walk.
Supervisors: Prof. George Willis, Dr Jeff Hogan.
In this talk, accessible to a general audience and particularly to students, we will review the most important contributions of Leonhard Euler to mathematics. We will give a brief biography of Leonhard Euler and a broad survey of his greatest achievements.
The Fourier Transform is a central and powerful tool in signal processing as well as being essential to Complex Analysis. However, it is limited to acting on complex-valued functions and thus cannot be applied directly to colour images (which have three real values' worth of output). In this talk, I discuss the limitations of current methods and then describe several methods of extending the Fourier Transform to larger algebras (specifically the Quaternions and Clifford algebras). This informs a research plan involving the study and computer implementation of a particular Clifford Fourier Transform.
The pooling problem is a nonlinear program (NLP) with applications in the refining and petrochemical industries, but also in mining. While it has been shown that the pooling problem is strongly NP-hard, it is one of the most promising NLPs to be solved to global optimality. In this talk I will discuss strengths and weaknesses of problem formulations and solution techniques. In particular, I will discuss convex and linear relaxations of the pooling problem, and show how they are related to graph theory, polyhedral theory and combinatorial optimization.
A look into an extension of the proof of a class of normal numbers by Davenport and Erdős, as well as a leap into the world of experimental mathematics relating to the property of strong normality, in particular the strong normality of some very famous numbers.
Inspired by the Hadamard Maximal Determinant Problem, we investigate the possible Gram matrices from rectangular {+1, -1} matrices. We can fully classify and count the Gram matrices from rectangular {+1, -1} matrices with just two rows and have conjectured a counting formula for the Gram matrices when there are more than two rows in the original matrix.
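For the two-row case the classification is small enough to verify by brute force. The sketch below (my own illustration, not the authors' method; the function name is hypothetical) enumerates all $2 \times n$ {+1, -1} matrices and collects their distinct Gram matrices, which have the form [[n, k], [k, n]]:

```python
from itertools import product

def distinct_grams(n):
    """Distinct Gram matrices A * A^T over all 2 x n {+1,-1} matrices A.

    Each column contributes +1 or -1 to the off-diagonal entry k,
    so the Gram matrix is [[n, k], [k, n]] with k in {-n, -n+2, ..., n}.
    """
    seen = set()
    for cols in product([1, -1], repeat=2 * n):
        # cols holds the two entries of each of the n columns, flattened.
        k = sum(cols[2 * i] * cols[2 * i + 1] for i in range(n))
        seen.add((n, k))
    return sorted(seen)

# n + 1 distinct Gram matrices arise in the two-row case.
print(len(distinct_grams(4)))  # 5
```

The count n + 1 for two rows is exactly the kind of closed form whose analogue for more rows is conjectured in the abstract.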
We build upon the ideas of short random walks in 2 dimensions in an attempt to understand the behaviours of these objects in higher dimensions. We explore the density and moment functions to find combinatorial and analytical results that generalise nicely.
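To make the object concrete, here is a minimal Monte Carlo sketch of an n-step uniform random walk in the plane (my own illustration, not the authors' methods; function names are hypothetical). It can be sanity-checked against the exact second moment E|W_n|^2 = n:

```python
import math
import random

def walk_distance(n, rng):
    """Distance from the origin after n unit steps in uniformly random directions."""
    x = y = 0.0
    for _ in range(n):
        t = rng.uniform(0.0, 2.0 * math.pi)
        x += math.cos(t)
        y += math.sin(t)
    return math.hypot(x, y)

def moment(n, p, trials=100_000, seed=1):
    """Monte Carlo estimate of the p-th moment of the n-step walk's distance."""
    rng = random.Random(seed)
    return sum(walk_distance(n, rng) ** p for _ in range(trials)) / trials

# E|W_n|^2 = n holds exactly, so the estimate for n = 3 should be near 3.
print(moment(3, 2))
```

Higher dimensions only change the step distribution (uniform on a sphere rather than a circle); the combinatorial structure of the moments is what the talk generalises.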
A history of Pi in the American Mathematical Monthly and the variety of approaches to understanding this stubborn constant. I will focus on the common threads of discussion over the last century, especially the changing methods for computing pi to high precision, to illustrate how we have progressed to our current state.
In this talk I will be exploring certain aspects of permutations of length n that avoid the pattern 1324. This is an interesting pattern in that it is simple yet defies simple analysis. It can be shown that there is a growth rate, yet it cannot be shown what that growth rate is; nor has an explicit formula been found for the number of permutations of length n which avoid the pattern (whereas such a formula has been found for every other non-Wilf-equivalent length-4 pattern). Specifically, this talk will look at how an encoding technique (developed by Bóna) for the 1324-avoiding permutations was cleverly used to obtain an upper bound for the growth rate of this class.
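For small n the avoiders can be counted directly by brute force (an illustrative sketch of the counting problem only, not Bóna's encoding technique; function names are my own). A permutation contains 1324 exactly when it has a subsequence a, b, c, d with a < c < b < d:

```python
from itertools import combinations, permutations

def contains_1324(p):
    """True if p contains an occurrence of the pattern 1324, i.e. a
    subsequence a, b, c, d (in that order) with a < c < b < d."""
    return any(a < c < b < d for a, b, c, d in combinations(p, 4))

def av_1324(n):
    """Count permutations of length n avoiding 1324 (feasible only for small n)."""
    return sum(1 for p in permutations(range(n)) if not contains_1324(p))

print([av_1324(n) for n in range(1, 6)])  # [1, 2, 6, 23, 103]
```

The factorial blow-up is exactly why an explicit formula or even a good upper bound on the growth rate is so sought after.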
The fairness of voting systems has been a topic of interest to mathematicians since 1770, when the Marquis de Condorcet proposed the Condorcet criterion, and particularly so after 1951, when Kenneth Arrow proved his impossibility theorem, which shows that no rank-order voting system can satisfy all the properties one would desire.
The system I have been studying is known as runoff voting. It is a method of voting used around the world, often for presidential elections such as in France. Each voter selects their favourite candidate, and if any candidate receives above 50% of the vote, then they are elected. If no one reaches this threshold, then another election is held, but this time with only the top two candidates from the previous election. Whoever receives more votes in this second round is elected. The runoff voting system satisfies a number of desired properties, though the running of the second round can have significant drawbacks: it can be very costly, it can result in periods of time without government, and it has been known to cause unrest in some politically unstable countries.
In my research I have introduced the parameter alpha, which varies the original threshold of 50% for a candidate winning the election in the first round. I am using both analytical methods and simulation to observe how the properties change with alpha.
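A minimal simulation sketch of runoff voting with the variable threshold alpha described above (my own illustration, assuming every ballot ranks all candidates; the function name is hypothetical):

```python
from collections import Counter

def runoff_winner(ballots, alpha=0.5):
    """Runoff voting with a variable first-round threshold alpha.

    Each ballot is a list of all candidates in order of preference;
    alpha = 0.5 recovers the classical 50% rule."""
    total = len(ballots)
    first = Counter(b[0] for b in ballots)
    leader, votes = first.most_common(1)[0]
    if votes > alpha * total:
        return leader  # wins outright in the first round
    # Otherwise, a second round between the top two first-round candidates.
    a, b = (c for c, _ in first.most_common(2))
    a_votes = sum(1 for bal in ballots if bal.index(a) < bal.index(b))
    # Ties in the second round are broken arbitrarily in favour of a.
    return a if a_votes * 2 >= total else b

ballots = ([["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3 + [["C", "B", "A"]] * 3)
print(runoff_winner(ballots))        # "A" leads round one, but "B" wins the runoff
print(runoff_winner(ballots, 0.35))  # lowering alpha lets "A" win outright
```

Varying alpha in examples like this is the simulation side of the research described above; the analytical side asks which fairness properties survive the change.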
As an extension of Copeland and Erdős' original paper of the same title, we present a clearer and more complete version of the proof that the number of integers up to $N$ ($N$ sufficiently large) which are not $(\varepsilon,k)$ normal is less than $N^{\delta}$ where $\delta<1$. We also conjecture that the numbers formed from the concatenation of the increasing sequence $a_{1},a_{2},a_{3},\dots$ (provided the sequence is dense enough) are not strongly normal.
We consider the problem of scattering of waves by a string with attached masses, focussing on the problem in the time-domain. We propose this as a simple model for more complicated wave scattering problems which arise in the study of elastic metamaterials. We present the governing system of equations and show how we have solved them. Some numerical simulations are also presented.
I shall review convergence results for non-convex Douglas-Rachford iterations.
I will summarize the main ingredients and results on classical conjugate duality for optimization problems, as given by Rockafellar in 1973.
Space! For Star Trek fans it's the final frontier; with all the vast, mind-boggling room it contains, it allows scientists and researchers of all persuasions to go where no one has gone before and explore worlds not yet explored. Like Star Trek fans, many mathematicians and statisticians are also interested in exploring the dynamics of space. From a statistician's point of view, our often data-driven perspective means we are concerned with exploring data that exists in multi-dimensional space and trying to visualise it using as few dimensions as possible.
This presentation will outline the links between the analysis of categorical data, multi-dimensional space, and the reduction of this space. The technique we explore is correspondence analysis and we shall see how eigen- and singular value decomposition fit into this data visualisation technique. We shall therefore look at some of the fundamental aspects of correspondence analysis and the various ways in which categorical data can be visualised.
People who study geometry like to ask the question: "What is the shape of that?" In this case, the word "that" can refer to a variety of things, from triangles and circles to knots and surfaces to the universe we inhabit and beyond. In this talk, we will examine some of my favourite gems from the world of geometry and see the interplay between geometry, algebra, and theoretical physics. And the only prerequisite you will need is your imagination!
Norman Do is, first and foremost, a self-confessed maths geek! As a high school student, he represented Australia at the International Mathematical Olympiad. He completed a PhD at The University of Melbourne, before working at McGill University in Canada. He is currently a Lecturer and a DECRA Research Fellow in the School of Mathematical Sciences at Monash University.
His research lies at the interface of geometry and mathematical physics, although he is excited by almost any flavour of mathematics. Norman is heavily involved in enrichment for school students, regularly lecturing at the National Mathematics Summer School and currently chairing the Australian Mathematical Olympiad Senior Problems Committee.
This event is run in conjunction with the University of Newcastle's 50th year anniversary celebrations.
In this talk I will discuss a class of systems evolving over two independent variables, which we refer to as "2D". For the class considered, extensions of ODE Lyapunov stability analysis can be made to ensure different forms of stability of the system. In particular, we can describe sufficient conditions for stability in terms of the divergence of a vector Lyapunov function.
This talk will highlight links between topics studied in undergraduate mathematics on one hand and frontiers of current research in analysis and symmetry on the other. The approach will be semi-historical and will aim to give an impression of what the research is about.
Fundamental ideas in calculus, such as continuity, differentiation and integration, are first encountered in the setting of functions on the real line. In addition to topological properties of the line, the algebraic properties that the set of real numbers possesses, namely being a group and a field, are also important. These properties express symmetries of the set of real numbers, and it turns out that this combination of calculus, algebra and symmetry extends to the setting of functions on locally compact groups, of which the group of rotations of a sphere and the group of automorphisms of a locally finite graph are examples. Not only do these groups frequently occur in applications, but theorems established prior to 1955 show that they are exactly the groups that support integration and differentiation.
Integration and continuity of functions on the circle and the group of rotations of the circle are the basic ingredients for Fourier analysis, which deals with convolution function algebras supported on the circle. Since these basic ingredients extend to locally compact groups, so do the methods of Fourier analysis, and the study of convolution algebras on these groups is known as harmonic analysis. Indeed, there is such a close connection between harmonic analysis and locally compact groups that any locally compact group may be recovered from the convolution algebras that it carries. This fact has recently been exploited with the creation of a theory of `locally compact quantum groups' that axiomatises properties of the algebras appearing in harmonic analysis and does away with the underlying group.
Locally compact groups have a rich structure theory in which significant advances are also currently being made. This theory divides into two cases: when the group is a connected topological space and when it is totally disconnected. The connected case has been well understood since the solution of Hilbert's Fifth Problem in the 1950s, which showed that such groups are essentially Lie groups. (Lie groups form the symmetries of smooth structures occurring in physics and underpinned, for example, the prediction of the existence of the Higgs boson.) For a long time it was thought that little could be said about totally disconnected groups in general, although important classes of such groups arising in number theory and as automorphism groups of graphs could be understood using techniques special to those classes. However, a complete general theory of these groups is now beginning to take shape following several breakthroughs in recent years. There is the exciting prospect that an understanding of totally disconnected groups matching that of the connected groups will be achieved in the next decade.
Long before current graphic, visualisation and geometric tools were available, John E. Littlewood, 1885-1977, wrote in his delightful Miscellany:
A heavy warning used to be given [by lecturers] that pictures are not rigorous; this has never had its bluff called and has permanently frightened its victims into playing for safety. Some pictures, of course, are not rigorous, but I should say most are (and I use them whenever possible myself). [[L], p. 53]
Over the past decade, the role of visual computing in my own research has expanded dramatically. In part this was made possible by the increasing speed and storage capabilities and the growing ease of programming of modern multi-core computing environments [BSC]. But, at least as much, it has been driven by my group's paying more active attention to the possibilities for graphing, animating or simulating most mathematical research activities.
I shall describe diverse work from my group in transcendental number theory (normality of real numbers [AB3]), in dynamic geometry (iterative reflection methods [AB]), probability (behaviour of short random walks [BS, BSWZ]), and matrix completion problems (especially, applied to protein conformation [ABT]). While all of this involved significant numerical-symbolic computation, I shall focus on the visual and experimental components.
[AB] F. Aragon and J.M. Borwein, "Global convergence of a non-convex Douglas-Rachford iteration," J. Global Optimization 57(3) (2013), 753-769. DOI 10.1007/s10898-012-9958-4.
[AB3] F. Aragon, D.H. Bailey, J.M. Borwein and P.B. Borwein, "Walking on real numbers," Mathematical Intelligencer 35(1) (2013), 42-60. See also http://walks.carma.newcastle.edu.au/.
[ABT] F. Aragon, J.M. Borwein and M. Tam, "Douglas-Rachford feasibility methods for matrix completion problems," ANZIAM Journal. Galleys June 2014. See also http://carma.newcastle.edu.au/DRmethods/.
[BSC] J.M. Borwein, M. Skerritt and C. Maitland, "Computation of a lower bound to Giuga's primality conjecture," Integers 13 (2013). Online Sept 2013 at #A67, http://www.westga.edu/~integers/cgi-bin/get.cgi.
[BS] J.M. Borwein and A. Straub, "Mahler measures, short walks and logsine integrals," Theoretical Computer Science, Special issue on Symbolic and Numeric Computation, 479(1) (2013), 4-21. DOI: http://link.springer.com/article/10.1016/j.tcs.2012.10.025.
[BSWZ] J.M. Borwein, A. Straub, J. Wan and W. Zudilin (with an appendix by Don Zagier), "Densities of short uniform random walks," Can. J. Math. 64(5) (2012), 961-990. http://dx.doi.org/10.4153/CJM-2011-079-2.
[L] J.E. Littlewood, A Mathematician's Miscellany, London: Methuen (1953); J.E. Littlewood and B. Bollobás (ed.), Littlewood's Miscellany, Cambridge University Press, 1986.
The AMSI Summer School is an exciting opportunity for mathematical sciences students from around Australia to come together over the summer break to develop their skills and networks. Details are available from the 2015 AMSI Summer School website.
Also see the CARMA events page for details of some Summer School seminars, open to all!
We apply the piecewise constant, discontinuous Galerkin method to discretize a fractional diffusion equation with respect to time. Using Laplace transform techniques, we show that the method is first order accurate at the $n$th time level~$t_n$, but the error bound includes a factor~$t_n^{-1}$ if we assume no smoothness of the initial data. We also show that for smoother initial data the growth in the error bound for decreasing time is milder, and in some cases absent altogether. Our error bounds generalize known results for the classical heat equation and are illustrated using a model 1D problem.
In this seminar I will talk about decomposing sequences into maximal palindromic factors and the applications of this in the hairpin analysis of pathogens such as HIV and TB.
(Computational Mathematics Special Session)
(Groups & Dynamics Special Session)
(Maths Education Special Session)
(Operator Algebra/ Functional Analysis Special Session)
The Mathematical Sciences Institute will host a three day workshop on more effective use of visualization in mathematics, physics, and statistics, from the perspectives of education, research and outreach. This is the second EViMS meeting, following the highly successful one held in Newcastle in November 2012. Our aim for the workshop is to help mathematical scientists understand the opportunities, risks and benefits of visualization, in research and education, in a world where visual content and new methods are becoming ubiquitous.
Visit the conference website for more information.
Multi-objective optimisation is one of the earliest fields of study in operations research. In fact, Francis Edgeworth (1845--1926) and Vilfredo Pareto (1848--1923) laid the foundations of this field of study over one hundred years ago. Many real-world problems involve multiple objectives. Due to conflicts between objectives, finding a feasible solution that simultaneously optimises all objectives is usually impossible. Consequently, in practice, decision makers want to understand the trade-off between objectives before choosing a suitable solution. Thus, generating many or all efficient solutions, i.e., solutions in which it is impossible to improve the value of one objective without a deterioration in the value of at least one other objective, is the primary goal in multi-objective optimisation. In this talk, I will focus on Multi-objective Integer Programs (MOIPs) and briefly explain some new efficient algorithms that I have developed since starting my PhD to solve MOIPs. I will also explain some links between the ideas of multi-objective integer programming and other fields of study such as game theory.
Supervisor: Thomas Kalinowski
Supervisor: Thomas Kalinowski
Supervisor: Brailey Sims
Supervisor: Brian Alspach
Tensor trains are a new class of functions which are thought to have some potential to deal with high-dimensional problems. While connected with algebraic geometry, the main concepts used are rank-$k$ matrix factorisations. In this talk I will review some basic properties of tensor trains. In particular I will consider algorithms for the solution of linear systems Ax=0. This talk is related to research in progress with Jochen Garcke (Uni Bonn and Fraunhofer Institute) on the solution of the chemical master equation. This talk assumes a basic background in matrix algebra. No background in algebraic geometry is required.
The mixed Littlewood conjecture, proposed by de Mathan and Teulié in 2004, states that for every real number $x$ one has $\liminf_{q\to\infty} q \cdot |q|_D \cdot \|qx\| = 0$, where $|q|_D$ is a so-called pseudo-norm which generalises the standard $p$-adic norm. In the talk we'll consider the set mad of potential counterexamples to this conjecture. Thanks to the results of Einsiedler and Kleinbock we already know that the Hausdorff dimension of mad is zero, so this set is very tiny. During the talk we'll see that the continued fraction expansion of every element of mad must satisfy some quite restrictive conditions. Among them, we'll see that for these expansions, considered as infinite words, the complexity function can neither grow too fast nor too slow.
We first introduce the notion of pattern sequences, which are defined by the number of (possibly overlapping) occurrences of a given word in the $\langle q,r\rangle$-numeration system. After surveying several properties of pattern sequences, we will give necessary and sufficient criteria for the algebraic independence of their generating functions. As applications, we deduce the linear relations between pattern sequences.
The proofs of the theorem and the corollaries are based on Mahler's method.
Self-avoiding walks are a widely studied model of polymers, which are defined as walks on a lattice where each successive step visits a neighbouring site, provided the site has not already been visited. Despite the apparent simplicity of the model, it has been of much interest to statistical mechanicians and probabilists for over 60 years, and many important questions about it remain open.
One of the most powerful methods to study self-avoiding walks is Monte Carlo simulation. I'll give an overview of the historical developments in this field, and will explain what ingredients are needed for a good Monte Carlo algorithm. I'll then describe how recent progress has allowed for the efficient simulation of truly long walks with many millions of steps. Finally, I'll discuss whether lessons we've learned from simulating self-avoiding walks may be applicable to a wide range of Markov chain Monte Carlo simulations.
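Before any Monte Carlo, the model itself is easy to state in code. A naive exact enumeration (my own illustration, feasible only for very short walks and certainly not the efficient algorithms of the talk) reproduces the known counts of self-avoiding walks on the square lattice:

```python
def count_saws(n, path=((0, 0),)):
    """Exactly enumerate n-step self-avoiding walks on the square lattice.

    path is the tuple of sites visited so far; each extension must step
    to a lattice neighbour not already on the path."""
    if n == 0:
        return 1
    x, y = path[-1]
    total = 0
    for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if step not in path:
            total += count_saws(n - 1, path + (step,))
    return total

print([count_saws(n) for n in range(1, 5)])  # [4, 12, 36, 100]
```

The count grows roughly like mu^n with mu about 2.64 on this lattice, which is exactly why clever Monte Carlo algorithms, rather than enumeration, are needed for walks of millions of steps.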
The completion with respect to the degree valuation of the field of rational functions over a finite field is often a fruitful analogue to consider when one would like to test ideas, methods and conjectures in Diophantine approximation for the real numbers. In many respects, this setting behaves very similarly to the real numbers; in particular, the metric theory of Diophantine approximation in this setup is well-developed, and in some respects more is known to be true in this setup than in the real numbers. However, natural analogues of other classical theorems in Diophantine approximation fail spectacularly in positive characteristic. In this talk, I will introduce the topic and give old and new results underpinning the similarities and differences of the theories of Diophantine approximation in positive characteristic and in characteristic zero.
We discuss the genesis of symbolic computation, its deployment into computer algebra systems, and the applications of these systems in the modern era.
We will pay special attention to polynomial system solvers and highlight the problems that arise when considering non-linear problems. For instance, forgetting about actually solving, how does one even represent infinite solution sets?
We introduce a subfamily of enlargements of a maximally monotone operator $T$. Our definition is inspired by a 1988 publication of Fitzpatrick. These enlargements are elements of the family of enlargements $\mathbb{E}(T)$ introduced by Svaiter in 2000. These new enlargements share with the $\epsilon$-subdifferential a special additivity property, and hence they can be seen as structurally closer to the $\epsilon$-subdifferential. For the case $T=\nabla f$, we prove that some members of the subfamily are smaller than the $\epsilon$-subdifferential enlargement. In this case, we construct a specific enlargement which coincides with the $\epsilon$-subdifferential.
Joint work with Juan Enrique Martínez Legaz, Mahboubeh Rezaei, and Michel Théra.
One of the key components of the earth's climate is the formation and melting of sea ice. Currently, we struggle to model this process correctly. One possible explanation for this shortcoming is that ocean waves play a key role and that their effect needs to be included in climate models. I will describe a series of recent experiments which seem to validate this hypothesis and discuss attempts by myself and others to model wave-ice interaction.
Optimization problems involving polynomial functions are of great importance in applied mathematics and engineering, and they are intrinsically hard problems. They arise in important engineering applications such as the sensor network localization problem, and provide a rich and fruitful interaction between algebraic-geometric concepts and modern convex programming (semi-definite programming). In this talk, we will discuss some recent progress in polynomial (semi-algebraic) optimization with a focus on the intrinsic link between the polynomial structure and the hidden convexity structure. The talk will be divided into two parts. In the first part, we will describe the key results in this new area, highlighting the geometric and conceptual aspects as well as recent work on global optimality theory, algorithms and applications. In the second part, we will explain how the semi-algebraic structure helps us to analyze some important and classical algorithms in optimization, such as the alternating projection algorithm, the proximal point algorithm and the Douglas-Rachford algorithm (if time permits).
This week I shall finish my discussion of searching graphs by looking at the recent paper by Clarke and MacGillivray that characterizes graphs that are k-searchable.
More than 120 years after their introduction, Lyapunov's so-called First and Second Methods remain the most widely used tools for stability analysis of nonlinear systems. Loosely speaking, the Second Method states that if one can find an appropriate Lyapunov function then the system has some stability property. A particular strength of this approach is that one need not know solutions of the system in order to make definitive statements about stability properties. The main drawback of the Second Method is the need to find a Lyapunov function, which is frequently a difficult task.
Converse Lyapunov Theorems answer the question: given a particular stability property, can one always (in principle) find an appropriate Lyapunov function? In the first instalment of this two-part talk, we will survey the history of the field and describe several such Converse Lyapunov Theorems for both continuous and discrete-time systems. In the second instalment we will discuss constructive techniques for numerically computing Lyapunov functions.
Third lecture: metric properties.
I will survey some recent and not-so-recent results surrounding the areas of Diophantine approximation and Mahler's method related to variations of the Chomsky-Schützenberger hierarchy.
It is known that the function s defined on an ordering of the 4^m monomial basis matrices of the real representation of the Clifford algebra Cl(m, m), where s(A) = 0 if A is symmetric, s(A) = 1 if A is skew, is a bent function. It is perhaps less well known that the function t, where t(A) = 0 if A is diagonal or skew, t(A) = 1 otherwise, is also a bent function, with the same parameters as s. The talk will describe these functions and their relation to Hadamard difference sets and strongly regular graphs.
The talk was originally presented at ADTHM 2014 in Lethbridge this year.
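The Clifford-algebra construction itself does not fit in a few lines, but the criterion used to verify such functions does: a Boolean function f on n bits is bent iff every Walsh-Hadamard coefficient has absolute value 2^{n/2}. Below is a generic checker (my own sketch, exercised on a standard bent function rather than the functions s and t of the talk; all names are hypothetical):

```python
from itertools import product

def dot(a, x):
    """Inner product of bit vectors a and x over GF(2)."""
    return sum(ai * xi for ai, xi in zip(a, x)) % 2

def is_bent(f, n):
    """f is bent iff |W_f(a)| = 2^(n/2) for every a, where
    W_f(a) = sum over x of (-1)^(f(x) + a.x)."""
    target = 2 ** (n // 2)
    for a in product((0, 1), repeat=n):
        w = sum((-1) ** (f(x) ^ dot(a, x)) for x in product((0, 1), repeat=n))
        if abs(w) != target:
            return False
    return True

# The inner-product function x1*x2 + x3*x4 is a classical bent function.
print(is_bent(lambda x: (x[0] & x[1]) ^ (x[2] & x[3]), 4))  # True
```

Bent functions achieve the flattest possible Walsh spectrum, which is what ties them to Hadamard difference sets and strongly regular graphs as mentioned above.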
In 1976, Ribe showed that if two Banach spaces are uniformly homeomorphic, then their finite-dimensional subspaces are similar in some sense. This suggests that properties of Banach spaces which depend only on finitely many vectors should have a purely metric characterization. We will briefly discuss the history of the Ribe program, as well as some recent developments.
In particular:
This Thursday, sees a return to graph searching in the discrete mathematics instructional seminar. I’ll be looking at characterization results.
The topological and measure structures carried by locally compact groups make them precisely the class of groups to which the methods of harmonic analysis extend. These methods involve study of spaces of real- or complex-valued functions on the group and general theorems from topology guarantee that these spaces are sufficiently large. When analysing particular groups however, particular functions deriving from the structure of the group are at hand. The identity function in the cases of $(\mathbb{R},+)$ and $(\mathbb{Z},+)$ are the most obvious examples, and coordinate functions on matrix groups and growth functions on finitely generated discrete groups are only slightly less obvious.
In the case of totally disconnected groups, compact open subgroups are essential structural features that give rise to positive integer-valued functions on the group. The set of values of $p$ for which the reciprocals of these functions belong to $L^p$ is related to the structure of the group and, when they do, the $L^p$-norm is a type of $\zeta$-function of $p$. This is joint work with Thomas Weigel of Milan.
If you’re enrolled in a BMath or Combined Maths degree or have Maths or Stats as a co-major, you’re invited to the B Math Party.
Come along for free food and soft drinks, meet fellow students and talk to staff about courses. Discover opportunities for summer research, Honours, Higher Degrees and scholarships.
This is the first in a series of lectures on this fascinating group.
Classical umbral calculus was introduced by Blissard in the 1860s and later studied by E. T. Bell and Rota. It is a symbolic computation method that is particularly efficient for proving identities involving elementary special functions such as Bernoulli or Hermite polynomials. I will show the link between this technique and moment representation, and provide examples of its application.
In this talk we consider economic Model Predictive Control (MPC) schemes. "Economic" means that the MPC stage cost models economic considerations (like maximal yield, minimal energy consumption...) rather than merely penalizing the distance to a pre-computed steady state or reference trajectory. In order to keep implementation and design simple, we consider schemes without terminal constraints and costs.
In the first (longer) part of the talk, we summarize recent results on the performance and stability properties of such schemes for nonlinear discrete time systems. Particularly, we present conditions under which one can guarantee practical asymptotic stability of the optimal steady state as well as approximately optimal averaged and transient performance. Here, dissipativity of the underlying optimal control problems and the turnpike property are shown to play an important role (this part is based on joint work with Tobias Damm, Marleen Stieler and Karl Worthmann).
In the second (shorter) part of the talk we present an application of an economic MPC scheme to a Smart Grid control problem (based on joint work with Philipp Braun, Christopher Kellett, Steven Weller and Karl Worthmann). While economic MPC shows good results for this control problem in numerical simulations, several aspects of this application are not covered by the available theory. This is explained in the last part of the talk, along with some suggestions on how to overcome this gap.
8:30 am | Registration, coffee and light breakfast |
9:00 am | Director's Welcome |
9:30 am | Session: "Research at CARMA" |
10:30 am | Morning tea |
11:00 am | Session: "Academic Liaising" |
11:30 am | Session: "Education/Outreach Activities" |
12:30 pm | Lunch |
2:00 pm | Session: "Future of Research at the University" |
2:30 pm | Session: "Future Planning for CARMA" |
3:30 pm | Afternoon tea |
4:00 pm | Session: Talks by members (to 5:20 pm) |
6:00 pm | Dinner |
A locating-total dominating set (LTDS) in a connected graph $G$ is a total dominating set $S$ of $G$ such that for every two vertices $u$ and $v$ in $V(G)-S$, $N(u) \cap S \neq N(v) \cap S$. The problem of determining the minimum cardinality of a locating-total dominating set, denoted $\gamma_t^l(G)$, is called the locating-total domination problem. We have improved the lower bound obtained by M. A. Henning and N. J. Rad [1], and we have also proved that the bound obtained is sharp for some special families of regular graphs.
[1] M. A. Henning and N. J. Rad, Locating-total domination in graphs, Discrete Applied Mathematics, 160 (2012), 1986-1993.
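For small graphs the definition above can be checked directly. The following brute-force sketch (function names are mine) computes $\gamma_t^l(G)$ by testing candidate sets in increasing size:

```python
from itertools import combinations

def is_ltds(adj, S):
    """Check that S is a locating-total dominating set of the graph
    given as a dict mapping each vertex to its set of neighbours."""
    S = set(S)
    # total domination: every vertex of G has a neighbour in S
    if any(not (adj[v] & S) for v in adj):
        return False
    # locating: vertices outside S have pairwise distinct N(v) & S
    sigs = [frozenset(adj[v] & S) for v in adj if v not in S]
    return len(sigs) == len(set(sigs))

def ltds_number(adj):
    """Minimum cardinality of an LTDS, by exhaustive search."""
    for r in range(1, len(adj) + 1):
        for S in combinations(adj, r):
            if is_ltds(adj, S):
                return r
```

On the 4-cycle, for example, `ltds_number` returns 2: two adjacent vertices totally dominate the cycle and the two remaining vertices see distinct neighbours in the set.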
This forum is a follow-on from the seminar that Professor Willis gave three weeks prior, on maths that seems too good to be true, and his ideas for incorporating the surprising and enlivening into what and how we teach: he gave as exemplars the miracles of Pythagorean triples and eigenvalues. A question raised in the discussion at that seminar was whether, and how, we might use assessment to encourage the kinds of learning we would like. This forum will be an opportunity to further that conversation.
Jeff, Andrew and Massoud have each kindly agreed to give us 5 minute presentations relating to the latter year maths courses that they have recently been teaching, to get our forum started. Jeff may speak on his developments in his new course on Fourier methods, Andrew will talk about some of the innovations that were introduced into Topology in the last few offerings which he has been using and further developing, and Massoud has a range of OR courses he might speak about.
Everyone is encouraged to share examples of their own practice or ideas that they have that may be of interest to others.
The restricted product over $X$ of copies of the $p$-adic numbers $\mathbb{Q}_p$, denoted $\mathbb{Q}_p(X)$, is self-dual and is the natural $p$-adic analogue of Hilbert space. The additive group of this space is locally compact and the continuous endomorphisms of the group are precisely the continuous linear operators on $\mathbb{Q}_p(X)$.
Attempts to develop a spectral theory for continuous linear operators on $\mathbb{Q}_p(X)$ will be described at an elementary level. The Berkovich spectral theory over non-Archimedean fields will be summarised and the spectrum of the linear operator $T$ compared with the scale of $T$ as an endomorphism of $(\mathbb{Q}_p(X),+)$.
The original motivation for this work, which is joint with Andreas Thom (Leipzig), will also be briefly discussed. A certain result that holds for representations of any group on a Hilbert space, proved by operator theoretic methods, can only be proved for representations of sofic groups on $\mathbb{Q}_p(X)$ and it is thought that the difficulty might lie with the lack of understanding of linear operators on $\mathbb{Q}_p(X)$ rather than with non-sofic groups.
Brian Alspach will continue discussing searching graphs embedded on the torus.
Ben will attempt to articulate what he has been meaning to work on. That is, choosing representatives with smallest 1-norm in an effort to find a nice bound on the number of vertices on level 1 of the corresponding rooted almost quasi-regular tree with 1 defect, and other ideas on choosing good representatives.
We present a PSPACE algorithm to compute a finite graph of exponential size that describes the set of all solutions of equations in free groups with rational constraints. This result became possible due to the recently invented recompression technique of Artur Jez. We show that it is decidable in PSPACE whether the set of all solutions is finite; if it is, then the length of a longest solution is at most doubly exponential.
This talk is based on a joint paper with Artur Jez and Wojciech Plandowski (arXiv:1405.5133 and LNCS 2014, Proceedings CSR 2014, Moscow, June 7 -- 11, 2014).
This week I shall continue the discussion of searching graphs.
Jon Borwein will discuss CARMA's new "Risk and finance study group". Please come and learn about the opportunities. See also http://www.financial-math.org/ and http://www.financial-math.org/blog/.
The Diophantine Problem in group theory can be stated as: is it algorithmically decidable whether an equation whose coefficients are elements of a given group has at least one solution in that group?
The talk will be a survey on this topic, with emphasis on what is known about solving equations in free groups. I will also present some of the algebraic geometry over groups developed in the last 20 years, and the connections to logic and geometry. I will conclude with results concerning the asymptotic behavior of satisfiable homogeneous equations in surface groups.
This week I shall start a series of talks on basic pursuit-evasion in graphs (frequently called cops and robber in the literature). We shall do some topological graph theory leading to an intriguing conjecture, and we'll look at a characterization problem.
We give some background to the metric basis problem (or resolving set) of a graph. We discuss various resolving sets with different conditions forced on them. We mainly stress the ideas of strong metric basis and partition dimension of graphs. We give the necessary literature background on these concepts and some preliminary results. We present our new results obtained so far as part of the research during my candidature. We also list the research problems I propose to study during the remainder of my PhD candidature and we present a tentative timeline of my research activities.
Mathematics can often seem almost too good to be true. This sense that mathematics is marvellous enlivens learning and stimulates research, but we tend to let remarkable things pass without remark once we become familiar with them. The miracles of Pythagorean triples and eigenvalues will be highlights of this talk.
The talk will include some ideas of what could be blended into our teaching program.
I shall be describing a largely unexplored concept in graph theory which is, I believe, an ideal thesis topic. I shall be presenting this at the CIMPA workshop in Laos in December.
Colin Reid will present some thoughts on limits of contraction groups.
A vast number of natural processes can be modelled by partial differential equations involving diffusion operators. The Navier-Stokes equations of fluid dynamics are among the most popular such models, but many other equations describing flows involve diffusion processes. These equations are often non-linear and coupled, and theoretical analysis can only provide limited information on the qualitative behaviour of their solutions. Numerical analysis is then used to obtain a prediction of the fluid's behaviour.
In many circumstances, the numerical methods used to approximate the models must satisfy engineering or computational constraints. For example, in underground flows in porous media (involved in oil recovery, carbon storage or hydrogeology), the diffusion properties of the medium vary a lot between geological layers, and can be strongly skewed in one direction. Moreover, the available meshes used to discretise the equations may be quite irregular. The sheer size of the domain of study (a few kilometres wide) also calls for methods that can be easily parallelised and give good and stable results on relatively large grids. These constraints make the construction and study of numerical methods for diffusion models very challenging.
In the first part of this talk, I will present some numerical schemes, developed in the last 10 years and designed to discretise diffusion equations as encountered in reservoir engineering, with all the associated constraints. In the second part, I will focus on mathematical tools and techniques constructed to analyse the convergence of numerical schemes under realistic hypotheses (i.e. without assuming non-physical smoothness on the data or the solutions). These techniques are based on the adaptation to the discrete setting of functional analysis results used to study the continuous equations.
The eighth edition of the conference series GAGTA (Geometric and Asymptotic Group Theory with Applications) will be held in Newcastle, Australia July 21-25 (Mon-Fri) 2014.
GAGTA conferences are devoted to the study of a variety of areas in geometric and combinatorial group theory, including asymptotic and probabilistic methods, as well as algorithmic and computational topics involving groups. In particular, areas of interest include group actions, isoperimetric functions, growth, asymptotic invariants, random walks, algebraic geometry over groups, algorithmic problems and their complexity, generic properties and generic complexity, and applications to non-commutative cryptography.
Visit the conference web site for more information.
Usually, when we want to study permutation groups, we look first at the primitive permutation groups (transitive groups in which point stabilizers are maximal); in the finite case these groups are the basic building blocks from which all finite permutation groups are built. Thanks to the seminal O'Nan-Scott Theorem and the Classification of the Finite Simple Groups, the structure of finite primitive permutation groups is broadly known.
In this talk I'll describe a new theorem of mine which extends the O'Nan-Scott Theorem to a classification of all primitive permutation groups with finite point stabilizers. This theorem describes the structure of these groups in terms of finitely generated simple groups.
The Lagrange multiplier method is fundamental in dealing with constrained optimization problems and is also related to many other important results.
In these two talks we first survey several different ideas in proving the Lagrange multiplier rule and then concentrate on the variational approach.
We will first discuss the idea, give a variational proof of the Lagrange multiplier rule in the convex case, and then consider the general case and its relationship with other results.
These talks are a continuation of the e-mail discussions with Professor Jon Borwein and are very informal.
We consider convexity conditions ensuring the monotonicity of right and left Riemann sums of a function $f:[0,1]\rightarrow \mathbb{R}$; applying our results in particular to functions such as
$f(x) = 1/\left(1+x^2\right)$.

My talk will be on projection/reflection methods and the application of tools from convex and variational analysis to optimisation problems, and I will talk about my thesis problem, which focuses on the following:
Reproducibility is emerging as a major issue for highly parallel computing, in much the same way (and for many of the same reasons) that it is emerging as an issue in other fields of science, technology and medicine, namely the growing numbers of cases where other researchers cannot reproduce published results. This talk will summarize a number of these issues, including the need to carefully document computational experiments, the growing concern over numerical reproducibility and, once again, the need for responsible reporting of performance. Have we learned the lessons of history?
The Lagrange multiplier method is fundamental in dealing with constrained optimization problems and is also related to many other important results.
In these two talks we first survey several different ideas in proving the Lagrange multiplier rule and then concentrate on the variational approach.
We will first discuss the idea, give a variational proof of the Lagrange multiplier rule in the convex case, and then consider the general case and its relationship with other results.
These talks are a continuation of the e-mail discussions with Professor Jon Borwein and are very informal.
(see PDF)
The relentless advance of computer technology, a gift of Moore's Law, together with the data deluge available via the Internet and other sources, has been a boon to both scientific research and business/industry. Researchers in many fields are hard at work exploiting this data. The discipline of "machine learning," for instance, attempts to automatically classify, interpret and find patterns in big data. It has applications as diverse as supernova astronomy, protein molecule analysis, cybersecurity, medicine and finance. However, with this opportunity comes the danger of "statistical overfitting," namely attempting to find patterns in data beyond prudent limits, thus producing results that are statistically meaningless.
The problem of statistical overfitting has recently been highlighted in mathematical finance. A just-published paper by the present author, Jonathan Borwein, Marcos Lopez de Prado and Jim Zhu, entitled "Pseudo-Mathematics and Financial Charlatanism," draws into question the present practice of using historical stock market data to "backtest" a new proposed investment strategy or exchange-traded fund. We demonstrate that in fact it is very easy to overfit stock market data, given powerful computer technology available, and, further, without disclosure of how many variations were tried in the design of a proposed investment strategy, it is impossible for potential investors to know if the strategy has been overfit. Hence, many published backtests are probably invalid, and this may explain why so many proposed investment strategies, which look great on paper, later fall flat when actually deployed.
In general, we argue that not only do those who directly deal with "big data" need to be better aware of the methodological and statistical pitfalls of analyzing this data, but those who observe problems of this sort arising in their profession need to be more vocal about them. Otherwise, to quote our "Pseudo-Mathematics" paper, "Our silence is consent, making us accomplices in these abuses."
We shall finish our look at two-sided group graphs.
The talk will provide a brief overview of the findings of two completed research projects and one ongoing project related to the knowledge and beliefs of teachers of school mathematics. It will consider some existing frameworks for types of teacher knowledge, and the place of teachers’ beliefs and confidence in relation to these, as well as touching on how a broad construct of teacher knowledge might develop.
Many biological environments, both intracellular and extracellular, are often crowded by large molecules or inert objects which can impede the motion of cells and molecules. It is therefore essential for us to develop appropriate mathematical tools which can reliably predict and quantify collective motion through crowded environments.
Transport through crowded environments is often classified as anomalous, rather than classical, Fickian diffusion. Over the last 30 years many studies have sought to describe such transport processes using either a continuous time random walk or a fractional order differential equation. For both these models the transport is characterized by a parameter $\alpha$, where $\alpha=1$ is associated with Fickian diffusion and $\alpha<1$ is associated with anomalous subdiffusion. In this presentation we will consider the motion of a single agent migrating through a crowded environment that is populated by impenetrable, immobile obstacles, and we estimate $\alpha$ using mean squared displacement data. These results will be compared with computer simulations mimicking the transport of a population of such agents through a similar crowded environment, where we match averaged agent density profiles to the solution of a related fractional order differential equation to obtain an alternative estimate of $\alpha$. I will examine the relationship between our estimate of $\alpha$ and the properties of the obstacle field for both a single agent and a population of agents; in both cases $\alpha$ decreases as the obstacle density increases, and the rate of decrease is greater for smaller obstacles. These very simple computer simulations suggest that it may be inappropriate to model transport through a crowded environment using widely reported approaches, including power laws to describe the mean squared displacement and fractional order differential equations to represent the averaged agent density profiles.
More details can be found in Ellery, Simpson, McCue and Baker (2014) The Journal of Chemical Physics, 140, 054108.
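A toy version of such a simulation (all parameter choices are mine and far cruder than the cited paper) fits $\alpha$ from mean squared displacement data for walkers on an obstacle-filled periodic lattice:

```python
import math
import random

def msd_exponent(obstacle_density, steps=100, walkers=300, size=64, seed=1):
    """Estimate alpha in MSD(t) ~ t^alpha for random walkers on a
    periodic square lattice with immobile, impenetrable obstacles."""
    rng = random.Random(seed)
    blocked = {(x, y) for x in range(size) for y in range(size)
               if rng.random() < obstacle_density}
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    msd = [0.0] * (steps + 1)
    for _ in range(walkers):
        while True:  # start each walker on an unblocked site
            px, py = rng.randrange(size), rng.randrange(size)
            if (px, py) not in blocked:
                break
        x = y = 0  # unwrapped displacement
        for t in range(1, steps + 1):
            dx, dy = rng.choice(moves)
            if ((px + dx) % size, (py + dy) % size) not in blocked:
                px, py = (px + dx) % size, (py + dy) % size
                x += dx
                y += dy
            msd[t] += x * x + y * y
    # alpha = least-squares slope of log MSD(t) against log t
    pts = [(math.log(t), math.log(msd[t] / walkers))
           for t in range(1, steps + 1) if msd[t] > 0]
    mx = sum(u for u, _ in pts) / len(pts)
    my = sum(v for _, v in pts) / len(pts)
    return (sum((u - mx) * (v - my) for u, v in pts)
            / sum((u - mx) ** 2 for u, _ in pts))
```

With no obstacles the fitted exponent should sit near the Fickian value $\alpha = 1$; raising the obstacle density depresses it on these time scales, mimicking transient subdiffusion.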
I will talk about the geometric properties of conic problems and their interplay with ill-posedness and the performance of numerical methods. This includes some new results on the facial structure of general convex cones, preconditioning of feasibility problems and characterisations of ill-posed systems.
What the three elements of the title have in common is the utility of graph searching as a model. In this talk I shall discuss the relatively brief history of graph searching, several models currently being employed, several significant results, unsolved conjectures, and the vast expanse of unexplored territory.
I will survey my career both mathematically and personally offering advice and opinions, which should probably be taken with so many grains of salt that it makes you nauseous. (Note: Please bring with you a sense of humour and all of your preconceived notions of how your life will turn out. It will be more fun for everyone that way.)
The Australian Mathematical Sciences Student Conference is held annually for Australian postgraduate and honours students of any mathematical science. The conference brings students together, gives an opportunity for presentation of work, facilitates dialogue, and encourages collaboration, within a friendly and informal atmosphere.
Visit the conference website for more details.
I am refereeing a manuscript in which a new construction for producing graphs from a group is given. There are some surprising aspects of this new method and that is what I shall discuss.
The additive or linearized polynomials were introduced by Ore in 1933 as an analogy over finite fields to his theory of differential and difference equations over function fields. The additive polynomials over a finite field $F=GF(q)$, where $q=p^e$ for some prime $p$, are those of the form
$f = f_0 x + f_1 x^p + f_2 x^{p^2} + \cdots + f_m x^{p^m}$ in $F[x]$.
They form a non-commutative left-euclidean principal ideal domain under the usual addition and functional composition, and possess a rich structure in both their decomposition structures and root geometries. Additive polynomials have been employed in number theory and algebraic geometry, and applied to constructing error-correcting codes and cryptographic protocols. In this talk we will present fast algorithms for decomposing and factoring additive polynomials, and also for counting the number of decompositions with particular degree sequences.
Algebraically, we show how to reduce the problem of decomposing additive polynomials to decomposing a related associative algebra, the eigenring. We give computationally efficient versions of the Jordan-Hölder and Krull-Schmidt theorems in this context to describe all possible factorizations. Geometrically, we show how to compute a representation of the Frobenius operator on the space of roots, and show how its Jordan form can be used to count the number of decompositions. We also describe an inverse theory, from which we can construct and count the number of additive polynomials with specified factorization patterns.
Some of this is joint work with Joachim von zur Gathen (Bonn) and Konstantin Ziegler (Bonn).
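The defining additivity, $f(a+b) = f(a) + f(b)$, is easy to see in a tiny case. Below is a sketch (encoding and names mine) over $GF(4)$, where $p=2$ and $x \mapsto x^2$ is the Frobenius map, so each term $c\,x^{2^i}$ is $GF(2)$-linear:

```python
# GF(4) = {0, 1, w, w^2} encoded as integers 0..3: addition is XOR,
# multiplication follows from the relation w^2 = w + 1.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

def ev(coeffs, x):
    """Evaluate the additive polynomial f = sum_i coeffs[i] * X^(2^i)
    at x in GF(4), with coeffs[i] the coefficient of X^(2^i)."""
    total, power = 0, x
    for c in coeffs:
        total ^= MUL[c][power]
        power = MUL[power][power]  # Frobenius: square the current power
    return total
```

For any choice of `coeffs`, `ev(coeffs, a ^ b) == ev(coeffs, a) ^ ev(coeffs, b)` holds for all field elements, since $(a+b)^2 = a^2 + b^2$ in characteristic 2; an ordinary power such as $x^3$ would fail this.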
It is axiomatic in mathematics research that all steps of an argument or proof are open to scrutiny. However, a proof based even in part on commercial software is hard to assess, because the source code---and sometimes even the algorithm used---may not be made available. There is the further problem that a reader of the proof may not be able to verify the author's work unless the reader has access to the same software.
For this reason open-source software systems have always enjoyed some use by mathematicians, but only recently have systems of sufficient power and depth become available which can compete with---and in some cases even surpass---commercial systems.
Most mathematicians and mathematics educators seem to gravitate to commercial systems partly because such systems are better marketed, but also in the view that they may enjoy some level of support. But this comes at the cost of initial purchase, plus annual licensing fees. The current state of tertiary funding in Australia means that for all but the very top tier of universities, the expense of such systems is harder to justify.
For educators, a problem is making the system available to students: it is known that students get the most use from a system when they have unrestricted access to it: at home as well as at their institution. Again, the use of an open-source system makes it trivial to provide access.
This talk aims to introduce several very powerful and mature systems: the computer algebra systems Sage, Maxima and Axiom; the numerical systems Octave and Scilab; and the assessment system WeBWorK (or as many of those as time permits). We will briefly describe these systems: their history, current status, usage, and comparison with commercial systems. We will also indicate ways in which anybody can be involved in their development. The presenter will describe his own experiences in using these software systems, and his students' attitudes to them.
Depending on audience interests and expertise, the talk might include looking at a couple of applications: geometry and Gr\"obner bases, derivation of Runge-Kutta explicit formulas, cryptography involving elliptic curves and finite fields, or digital image processing.
The talk will not assume any particular mathematics beyond undergraduate material or material with which the audience is comfortable, and will be as polemical as the audience allows.
In this talk we discuss the importance of M-stationarity conditions for a special class of one-stage stochastic mathematical programming problems with complementarity constraints (SMPCC, for short). The M-stationarity concept is well known for deterministic MPCC problems, and using results for deterministic MPCC problems we can easily derive M-stationarity conditions for SMPCC problems under some well-known constraint qualifications. It is well observed that under the MPCC linear independence constraint qualification we obtain strong stationarity conditions at a local minimum, which is a stronger notion than M-stationarity; the same result can be derived for SMPCC problems under SMPCC-LICQ. The question that then arises is: what is the point of studying M-stationarity under the assumption of SMPCC-LICQ? To answer this question we discuss the sample average approximation (SAA) method, a common technique for solving stochastic optimization problems, in which one discretizes the underlying probability space and approximates the expectation functionals via the strong Law of Large Numbers. The main result of this discussion is as follows: if we consider a sequence of M-type Fritz John points of the SAA problems, then any accumulation point of this sequence is an M-stationary point under SMPCC-LICQ. This kind of result, in general, does not hold for strong stationarity conditions.
Our aim in this talk is to show that the D-gap function can play a pivotal role in developing inexact descent methods to solve the monotone variational inequality problem where the feasible set of the variational inequality is a closed convex set rather than just the non-negative orthant. We also focus on the issue of regularization of variational inequalities. Friedlander and Tseng showed in 2007 that regularizing the convex objective function with another convex function, suitably chosen in practice, can make the solution of the problem simpler, and they provided criteria for exact regularization of convex optimization problems. Here we ask to what extent one can extend the idea of exact regularization to the context of variational inequalities. We study this question in this talk and show the central role played by the dual gap function in the analysis.
The need for well-trained secondary mathematics teachers is well documented. In this talk we will discuss strategies we have developed at JCU to address the quality of graduating mathematics teachers. These strategies are broadly grouped as (i) having students develop a sense of how they learn mathematics and the skills they can work on to improve their learning of mathematics, and (ii) the need for specific mathematics content subjects for pre-service secondary mathematics teachers.
I will solve a variety of mathematical problems in Maple. These will come from geometry, number theory, analysis and discrete mathematics.
A companion book chapter is http://carma.newcastle.edu.au/jon/hhm.pdf.
Using gap functions to devise error bounds for some special classes of monotone variational inequalities is a fruitful venture, since it allows us to obtain error bounds for certain classes of convex optimization problems. It is to be noted that a Hoffman-type approach to obtaining error bounds for the solution set of a convex programming problem does not turn out to be fruitful, and thus using the vehicle of variational inequalities seems fundamental in this case. We begin the discussion by introducing several popular gap functions for variational inequalities, such as the Auslender gap function and Fukushima's regularized gap function, and showing how error bounds can be created out of them. We also spend a brief time with gap functions for variational inequalities with set-valued maps, which correspond to non-smooth convex optimization problems. We then shift our focus to creating error bounds using the dual gap function, which is, to the best of our knowledge, the only convex gap function known in the literature; in fact, this gap function had never been used for creating error bounds. Error bounds can be used as stopping criteria, and thus the dual gap function can be used both to solve the variational inequality and to develop a stopping criterion. We present several recent results on error bounds using the dual gap function and also provide an application to quasiconvex optimization.
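As a concrete illustration (a minimal sketch of my own, not from the talk), Fukushima's regularized gap function for $VI(F, C)$ with a box constraint set has a closed-form maximizer given by a projection:

```python
def proj_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return [min(hi, max(lo, xi)) for xi in x]

def reg_gap(F, x):
    """Fukushima's regularized gap function over the unit box:
       theta(x) = max_{y in C} <F(x), x - y> - 0.5*||x - y||^2,
    with theta(x) >= 0 and theta(x) = 0 iff x solves VI(F, C).
    Returns (theta(x), maximizer y), where y = proj_C(x - F(x))."""
    Fx = F(x)
    y = proj_box([xi - gi for xi, gi in zip(x, Fx)])
    lin = sum(gi * (xi - yi) for gi, xi, yi in zip(Fx, x, y))
    quad = 0.5 * sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return lin - quad, y
```

For the strongly monotone map $F(x) = x - a$ with $a$ inside the box, the VI solution is $x^* = a$: the gap vanishes there and is positive elsewhere, and the maximizer $y$ can serve as the next iterate of a simple descent scheme.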
We begin the talk with the story of Dido and the Brachistochrone problem. We show how these two problems lead to the two most fundamental problems of the calculus of variations. The Brachistochrone problem leads to the basic problem of the calculus of variations, and that leads to the Euler-Lagrange equation. We show the link between the Euler-Lagrange equations and the laws of classical mechanics.
We also discuss the Legendre conditions and Jacobi conjugate points, which lead to sufficient conditions for weak local minimum points.
Dido's problem leads to the problem of Lagrange, in which Lagrange introduced his multiplier rule. We also speak a bit about the problem of Bolza, discuss how the class of extremals can be enlarged, and touch on the issue of existence of solutions in the calculus of variations, Tonelli's direct method, and some more facts on the quest for multiplier rules.
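For reference, the basic problem and the Euler-Lagrange equation mentioned above can be stated in their standard form (this is the textbook formulation, not specific to the talk):

```latex
\min_{y}\; J[y] = \int_a^b L\bigl(x, y(x), y'(x)\bigr)\,dx,
\qquad
\frac{d}{dx}\,\frac{\partial L}{\partial y'} - \frac{\partial L}{\partial y} = 0 .
```

For the Brachistochrone, the travel-time functional has $L = \sqrt{(1 + y'^2)/(2gy)}$ (with $y$ measured downward from the starting point), and the Euler-Lagrange equation yields the cycloid.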
This talk is a practice talk for an invited talk I will soon give in Indonesia, in which I was asked to present on Education at a conference on Graph Theory.
In 1929 Alfred North Whitehead wrote: "The university imparts information, but it imparts it imaginatively. At least, this is the function it should perform for society. A university which fails in this respect has no reason for existence. This atmosphere of excitement, arising from imaginative consideration, transforms knowledge. A fact is no longer a bare fact: it is invested with all its possibilities."
In the light and inspiration of Whitehead's quote, I will discuss some aspects of the problem and challenge of mathematical education as we meet it in Universities today, with reference to some of the ways that combinatorics may be an ideal vehicle for sharing authentic mathematical experiences with diverse students.
The American mathematical research community experienced remarkable changes over the course of the three decades from 1920 to 1950. The first ten years witnessed the "corporatization" and "capitalization" of the American Mathematical Society, as mathematicians like Oswald Veblen and George Birkhoff worked to raise private, governmental, and foundation monies in support of research-level mathematics. The next decade, marked by the stock market crash and Depression, almost paradoxically witnessed the formation and building up of a number of strongly research-oriented departments across the nation at the same time that noted mathematical refugees were fleeing the ever-worsening political situation in Europe. Finally, the 1940s saw the mobilization of American research mathematicians in the war effort and their subsequent efforts to insure that pure mathematical research was supported as the Federal government began to open its coffers in the immediately postwar period. Ultimately, the story to be told here is a success story, but one of success in the face of many obstacles. At numerous points along the way, things could have turned out dramatically differently. This talk will explore those historical contingencies.
About the speaker:
Karen Parshall is Professor of History and Mathematics at the University of Virginia, where she has served on the faculty since 1988. Her research focuses primarily on the history of science and mathematics in America and in the history of 19th- and 20th-century algebra. In addition to exploring technical developments of algebra—the theory of algebras, group theory, algebraic invariant theory—she has worked on more thematic issues such as the development of national mathematical research communities (specifically in the United States and Great Britain) and the internationalization of mathematics in the nineteenth and twentieth centuries. Her most recent book (co-authored with Victor Katz), Taming the Unknown: A History of Algebra from Antiquity to the Early Twentieth Century, will be published by Princeton University Press in June 2014.
In these two talks I want to talk, both generally and personally, about the use of tools in the practice of modern research mathematics. To focus my attention I have decided to discuss the way I and my research group members have used tools, primarily computational (visual, numeric and symbolic), during the past five years. When the tools are relatively accessible I shall exhibit details; when they are less accessible I settle for illustrations and discussion of process.
Long before current graphic, visualisation and geometric tools were available, John E. Littlewood, 1885-1977, wrote in his delightful Miscellany:
A heavy warning used to be given [by lecturers] that pictures are not rigorous; this has never had its bluff called and has permanently frightened its victims into playing for safety. Some pictures, of course, are not rigorous, but I should say most are (and I use them whenever possible myself).
Over the past five years, the role of visual computing in my own research has expanded dramatically. In part this was made possible by the increasing speed and storage capabilities - and the growing ease of programming - of modern multi-core computing environments. But, at least as much, it has been driven by paying more active attention to the possibilities for graphing, animating or simulating most mathematical research activities.
The idea of an almost automorphism of a tree will be introduced, as well as what we are calling quasi-regular trees. I will then outline what I have been doing with the almost automorphisms of almost quasi-regular trees with two valencies, and the challenges that come with using more valencies.
This year is the fiftieth anniversary of Ringel's posing of the well-known graph decomposition problem called the Oberwolfach problem. In this series of talks, I shall examine what has been accomplished so far, take a look at current work, and suggest a possible new avenue of approach. The material to be presented essentially will be self-contained.
This is joint work with Geoffrey Lee.
The set of permutations generated by passing an ordered sequence through a stack of depth 2 followed by an infinite stack in series was shown to be finitely based by Elder in 2005. In this new work we obtain an algebraic generating function for this class, by showing it is in bijection with an unambiguous context-free grammar.
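Such stack-sorting classes can be explored by brute force for small lengths. The sketch below (my own illustration, not Elder's construction) enumerates, by a search over machine configurations, every permutation producible by passing 1..n through stacks in series with given depth bounds:

```python
def outputs(n, depths):
    """All permutations of 1..n producible by passing the input through
    stacks in series, where depths[j] bounds the depth of the j-th stack."""
    start = (0, tuple(() for _ in depths), ())   # (symbols read, stacks, output)
    seen = {start}
    todo = [start]
    results = set()
    while todo:
        i, stacks, out = todo.pop()
        if len(out) == n:
            results.add(out)
            continue
        moves = []
        if i < n and len(stacks[0]) < depths[0]:            # input -> first stack
            moves.append((i + 1, (stacks[0] + (i + 1,),) + stacks[1:], out))
        for j in range(len(depths) - 1):                    # stack j -> stack j+1
            if stacks[j] and len(stacks[j + 1]) < depths[j + 1]:
                s = list(stacks)
                s[j], s[j + 1] = s[j][:-1], s[j + 1] + (s[j][-1],)
                moves.append((i, tuple(s), out))
        if stacks[-1]:                                      # last stack -> output
            moves.append((i, stacks[:-1] + (stacks[-1][:-1],),
                          out + (stacks[-1][-1],)))
        for m in moves:
            if m not in seen:
                seen.add(m)
                todo.append(m)
    return results

# A single unbounded stack (depth n suffices) gives the Catalan counts,
# while prepending a depth-2 stack already yields all 6 permutations at n = 3.
assert [len(outputs(n, [n])) for n in (1, 2, 3, 4)] == [1, 2, 5, 14]
assert len(outputs(3, [2, 3])) == 6
```

Counting the outputs for successive n is one way to sanity-check a conjectured generating function against initial terms.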
In this talk I will present a general method of finding simple groups acting on trees. This process, beginning with any group $G$ acting on a tree, produces more groups known as the $k$-closures of $G$. I will use several examples to highlight the versatility of this method, and I will discuss the properties of the $k$-closures that allow us to find abstractly simple groups.
The TELL ME agent based model will simulate personal protective decisions such as vaccination or hand hygiene during an influenza epidemic. Such behaviour may be adopted in response to communication from health authorities, taking into account perceived influenza risk. The behaviour decisions are to be modelled with a combination of personal attitude, average local attitude, the local number of influenza cases and the case fatality rate. The model is intended to be used to understand the effects of choices about how to communicate with citizens about protecting themselves from epidemics. I will discuss the TELL ME model design, the cognitive theory supporting the design and some of the expected problems in building the simulation.
In this final talk of the sequence we will sketch Blinovsky's recent proof of the conjecture: whenever $n \geq 4k$ and $A$ is a set of $n$ numbers with sum 0, there are at least $\binom{n-1}{k-1}$ subsets of size $k$ which have non-negative sum. The nice aspect of the proof is the combination of hypergraph concepts with convex geometry arguments and a Berry-Esseen inequality for approximating the hypergeometric distribution. The not so nice aspect (which will be omitted in the talk) is the amount of very tedious algebraic manipulation that is necessary to verify the required estimates. Slides for all four MMS talks are available.
I will describe the research I have been doing with Fran Aragon and others, using graphical methods to study the properties of real numbers. There will be very few formulas and more pictures and movies.
The Erdos-Ko-Rado (EKR) Theorem is a classical result in combinatorial set theory and is absolutely fundamental to the development of extremal set theory. It answers the following question: What is the maximum size of a family F of k-element subsets of the set {1,2,...,n} such that any two sets in F have nonempty intersection?
In the 1980's Manickam, Miklos and Singhi (MMS) asked the following question: Given that a set A of n real numbers has sum zero, what is the smallest possible number of k-element subsets of A with nonnegative sum? They conjectured that the optimal solutions for this problem look precisely like the extremal families in the EKR theorem. This problem has been open for almost 30 years and many partial results have been proved. There was a burst of activity in 2012, culminating in a proof of the conjecture in October 2013.
This series of talks will explore the basic EKR theorem and discuss some of the recent results on the MMS conjecture.
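For tiny parameters both statements can be checked by exhaustive search. The sketch below is an illustration only (the functions and test instances are mine): it brute-forces the EKR bound for n = 5, k = 2, and counts nonnegative k-subsets of the conjectured MMS extremal configuration:

```python
from itertools import combinations
from math import comb

def max_intersecting_family(n, k):
    """Largest pairwise-intersecting family of k-subsets of an n-set, by brute force."""
    sets = [set(c) for c in combinations(range(n), k)]
    for size in range(len(sets), 0, -1):
        for fam in combinations(sets, size):
            if all(a & b for a, b in combinations(fam, 2)):
                return size

def nonneg_k_subsets(A, k):
    """Number of k-element subsets of A with nonnegative sum."""
    return sum(1 for c in combinations(A, k) if sum(c) >= 0)

# EKR: for n >= 2k the maximum is C(n-1, k-1) -- all k-sets through a fixed point.
assert max_intersecting_family(5, 2) == comb(4, 1) == 4

# MMS: the conjectured minimiser puts one large positive entry against n-1
# copies of -1; here exactly C(n-1, k-1) of the k-subsets have nonnegative sum.
assert nonneg_k_subsets([7] + [-1] * 7, 2) == comb(7, 1) == 7
```

The brute-force family search is exponential, so this is only feasible for very small n; it does, however, make the parallel between the two extremal configurations concrete.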
We show that ESO universal Horn logic (existential second logic where the first order part is a universal Horn formula) is insufficient to capture P, the class of problems decidable in polynomial time. This is true in the presence of a successor relation in the input vocabulary. We provide two proofs -- one based on reduced products of two structures, and another based on approximability theory (the second proof is under the assumption that P is not the same as NP). The difference between the results here and those in (Graedel 1991) is due to the fact that the expressions this talk deals with are at the "structure level", whereas the expressions in (Graedel 1991) are at the "machine level" since they encode machine computations -- a case of "Easier DONE than SAID".
This PhD so far has focussed on two distinct optimisation problems pertaining to public transport, as detailed below:
Within public transit systems, so-called flexible transport systems have great potential to offer increases in mobility and convenience and decreases in travel times and operating costs. One such service is the Demand Responsive Connector, which transports commuters from residential addresses to transit hubs via a shuttle service, from where they continue their journey via a traditional timetabled service. We investigate various options for implementing a demand responsive connector and the associated vehicle scheduling problems. Previous work has only considered regional systems, where vehicles drop passengers off at a predetermined station -- we relax that condition and investigate the benefits of allowing alternative transit stations. An extensive computational study shows that the more flexible system offers cost advantages over regional systems, especially when transit services are frequent, or transit hubs are close together, with little impact on customer convenience.
A complement to public transport systems is that of ad hoc ride sharing, where participants (either offering or requesting rides) are paired with participants wanting the reverse, by some central service provider. Although such schemes are currently in operation, the lack of certainty offered to riders (i.e. the risk of not finding a match, or of a driver not turning up) discourages potential users. Critically, this can prevent the system from reaching a "critical mass" and becoming self-sustaining. We are investigating the situation where the provider has access to a fleet of dedicated drivers, and may use these to service riders, especially when such a system is in its infancy. We investigate some of the critical pricing issues surrounding this problem, present some optimisation models and provide some computational results.
New questions regarding the reliability and verifiability of scientific findings are emerging as computational methods are being increasingly used in research. In this talk I will present a framework for incorporating computational research into the scientific method, namely standards for carrying out and disseminating research to facilitate reproducibility. I will present some recent empirical results on data and code publication; the pilot project http://ResearchCompendia.org for linking data and codes to published results and validating findings; and the "Reproducible Research Standard" for ensuring the distribution of legally usable data and code. If time permits, I will present preliminary work on assessing the reproducibility of published computational findings based on the 2012 ICERM workshop on Reproducibility in Computational and Experimental Mathematics report [1]. Some of this research is described in my forthcoming co-edited books "Implementing Reproducible Research" and "Privacy, Big Data, and the Public Good."
[1] D. H. Bailey, J. M. Borwein, Victoria Stodden "Set the Default to 'Open'," Notices of the AMS, June/July 2013.
Nowadays huge amounts of personal data are regularly collected in all spheres of life, creating interesting research opportunities but also a risk to individual privacy. We consider the problem of protecting confidentiality of records used for statistical analysis, while preserving as much of the data utility as possible. Since OLAP cubes are often used to store data, we formulate a combinatorial problem that models a procedure to anonymize 2-dimensional OLAP cubes. In this talk we present a parameterised approach to this problem.
Brad Pitt's zombie-attack movie "World War Z" may not seem like a natural jumping-off point for a discussion of mathematics or science, but in fact it was a request to review that movie for "The Conversation", and the review I wrote, that led to my being invited to give a public lecture on zombies and maths at the Academy of Science next week. This week's colloquium will largely be a preview of that talk, so it should be generally accessible.
My premise is that movies and maths have something in common. Both enable a trait which seems to be more highly developed in humans than in any other species, with profound consequences: the desire and capacity to explore possibility-space.
The same mathematical models can let us playfully explore how an outbreak of zombie-ism might play out, or how an outbreak of an infectious disease like measles would spread, depending, in part, on what choices we make. Where a movie gives us deep insight into one possibility, mathematics enables us to explore, all at once, millions of scenarios, and see where the critical differences lie.
I will try to use mathematical models of zombie outbreak to discuss how mathematical modelling and mathematical ideas such as functions and phase transitions might enter the public consciousness in a positive way.
As is well known, semidefinite relaxations of discrete optimization problems can yield excellent bounds on their solutions. We present three examples from our collaborative research. The first addresses the quadratic assignment problem, and a formulation is developed which yields the strongest lower bounds known for larger dimensions. Utilizing the latest iterative SDP solver and ideas from verified computing, a realistic problem from communications is solved for dimensions up to 512.
A strategy based on the Lovasz theta function is generalized to compute upper bounds on the spherical kissing number utilizing SDP relaxations. Multiple precision SDP solvers are needed, and improvements on known results for all kissing numbers in dimensions up to 23 are obtained. Finally, generalizing ideas of Lex Schrijver, improved upper bounds for general binary codes are obtained in many cases.
Without convexity the convergence of a descent algorithm can normally only be certified in the weak sense that every accumulation point of the sequence of iterates is critical. This does not at all correspond to what we observe in practice, where these optimization methods always converge to a single limit point, even though convergence may sometimes be slow.
Around 2006 it was observed that convergence to a single limit can be proved for objective functions having certain analytic features. The property which is instrumental here is called the Lojasiewicz inequality, imported from analytic function theory. While this has been successfully applied to smooth functions, the case of non-smooth functions turns out to be more difficult. In this talk we obtain some progress for upper-C1 functions. Then we proceed to show that this is not just out of a theoretical sandpit, but has consequences for applications in several fields. We sketch an application in destructive testing of laminate materials.
A lattice rule with a randomly-shifted lattice estimates a mathematical expectation, written as an integral over the s-dimensional unit hypercube, by the average of n evaluations of the integrand, at the n points of the shifted lattice that lie inside the unit hypercube. This average provides an unbiased estimator of the integral and, under appropriate smoothness conditions on the integrand, it has been shown to converge faster as a function of n than the average at n independent random points (the standard Monte Carlo estimator). In this talk, we study the behavior of the estimation error as a function of the random shift, as well as its distribution for a random shift, under various settings. While it is well known that the Monte Carlo estimator obeys a central limit theorem when $n \rightarrow \infty$, the randomized lattice rule does not, due to the strong dependence between the function evaluations. We show that for the simple case of one-dimensional integrands, the limiting error distribution is uniform over a bounded interval if the integrand is non-periodic, and has a square root form over a bounded interval if the integrand is periodic. We find that in higher dimensions, there is little hope to precisely characterize the limiting distribution in a useful way for computing confidence intervals in the general case. We nevertheless examine how this error behaves as a function of the random shift from different perspectives and on various examples. We also point out a situation where a classical central-limit theorem holds when the dimension goes to infinity, we provide guidelines on when the error distribution should not be too far from normal, and we examine how far from normal the error distribution is in examples inspired by real-life applications.
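A minimal sketch of the estimator under discussion, assuming a 2-dimensional Fibonacci lattice and a toy product integrand with known integral 1 (both are illustrative choices of mine, not taken from the talk):

```python
import random

def shifted_lattice_rule(f, n, z, shift):
    """Average f over the randomly shifted rank-1 lattice {(i*z/n + shift) mod 1}."""
    total = 0.0
    for i in range(n):
        x = [(i * zj / n + dj) % 1.0 for zj, dj in zip(z, shift)]
        total += f(x)
    return total / n

# Illustrative smooth 2-d integrand; its integral over the unit square is 1.
f = lambda x: (1 + (x[0] - 0.5)) * (1 + (x[1] - 0.5))

random.seed(0)
n, z = 610, (1, 377)                        # Fibonacci lattice for s = 2
shift = [random.random(), random.random()]  # the random shift
est = shifted_lattice_rule(f, n, z, shift)  # unbiased; close to 1 for smooth f
assert abs(est - 1) < 0.05
```

Averaging `est` over several independent shifts gives the unbiased randomized estimator whose error distribution the talk analyses.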
This talk will give an introduction to the Kepler-Coulomb and harmonic oscillator systems, fundamental in both the classical and quantum worlds. These systems are related by "coupling constant metamorphosis", a remarkable trick that exchanges the energy of one system with the coupling constant of the other. The trick can be seen to be a type of conformal transformation, that is, a scaling of the underlying metric, which maps "conformal symmetries" to "true symmetries" of a Hamiltonian system.
In this talk I will explain the statements above and discuss some applications of coupling constant metamorphosis to superintegrable systems and differential geometry.
Jonathan Kress of UNSW will be talking about the UNSW experience of using MapleTA for online assignments in Mathematics over an extended period of time.
Ben Carter will be talking about some of the rationale for online assignments, how we're using MapleTA here, and our hopes for the future, including how we might use it as a basis for a flipped classroom approach to some of our teaching.
Polyhedral links, interlinked and interlocked architectures, have been proposed for the description and characterization of DNA and protein polyhedra. Chirality is a very important feature for biomacromolecules. In this talk, we discuss the topological chirality of a type of DNA polyhedral links constructed by the strategy of "n-point stars" and a type of protein polyhedral links constructed by "three-cross curves" covering. We shall ignore DNA sequence and use the orientation of the two backbone strands of the dsDNA to orient DNA polyhedral links, thus considering DNA polyhedral links as oriented links with antiparallel orientations. We shall ignore protein sequence and view protein polyhedral links as unoriented ones. It is well known that there is a correspondence between alternating links and plane graphs. We prove that links corresponding to bipartite plane graphs have antiparallel orientations, and under these orientations, their writhes are not zero. As a result, the type of right-handed double crossover 4-turn DNA polyhedral links are topologically chiral. We also prove that the unoriented link corresponding to a connected, even, bipartite plane graph has self-writhe 0, and using the Jones polynomial we present a criterion for chirality of unoriented alternating links with self-writhe 0. By applying this criterion we obtain that 3-regular protein polyhedral links are also topologically chiral. Topological chirality always implies chemical chirality, hence the corresponding DNA and protein polyhedra are all chemically chiral. Our chiral criteria may be used to detect the topological chirality of more complicated DNA and protein polyhedral links to be synthesized by chemists and biologists in the future.
Liz will talk about how the UoN could make more use of the flipped classroom. The flipped classroom is an approach where content is provided in advance to students and instead of the traditional lecture the time is spent interacting with students through worked examples etc.
Liz will examine impacts on student learning, but also consider how to make this approach manageable within staff workloads and how lecture theatre design can be altered to facilitate this new way of learning.
It is now known for a number of models of statistical physics in two dimensions (such as percolation or the Ising model) that, at their critical point, they behave in a conformally invariant way in the large-scale limit, and give rise in this limit to random fractals that can be mathematically described via Schramm's Stochastic Loewner Evolutions.
The goal of the present talk will be to discuss some aspects of what remains valid or should remain valid about such models and their conformal invariance, when one looks at them within a fractal-type planar domain. We shall in particular describe (and characterize) a continuous percolation interface within certain particular random fractal carpets. Part of this talk will be based on joint work with Jason Miller and Scott Sheffield.
I'll discuss the analytic solution to the limit shape problem for random domino tilings and "lozenge" tilings, and in particular try to explain how these limiting surfaces develop facets.
The previous assessment method for MCHA2000 - Mechatronic Systems (which is common to many other courses) allowed students to collect marks from assessments and quizzes during the semester and pass the course without reaching a satisfactory level of competency in some topics. In 2013, we obtained permission from the President of Academic Senate to test a different assessment scheme that aimed at preventing students from passing without attaining a minimum level of competency in all topics of the course. This presentation discusses the assessment scheme tested and the results we obtained, which suggest that the proposed scheme makes a difference.
MCHA2000 is a course about modelling, simulation, and analysis of physical system dynamics. It is believed that the proposed model is applicable to other courses.
Bio: A/Prof Tristan Perez, Lecturer of MCHA2000. http://www.newcastle.edu.au/profile/tristan-perez
In this seminar I will review my recent work into Hankel determinants and their number theoretic uses. I will briefly touch on the p-adic evaluation of a particular determinant and comment on how Hankel determinants together with Padé approximants can be used in some irrationality proofs. A fundamental determinant property will be demonstrated and I will show what implications this holds for positive Hankel determinants and where we might go from here.
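As a small illustration of the objects involved (not the speaker's p-adic computation), Hankel determinants can be evaluated exactly over the rationals; the classical example below checks that every Hankel determinant of the Catalan numbers equals 1:

```python
from fractions import Fraction
from math import comb

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(v) for v in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i]), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            t = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= t * M[i][c]
    return d

def hankel(seq, n):
    """The n x n Hankel matrix with entries seq[i + j]."""
    return [[seq[i + j] for j in range(n)] for i in range(n)]

# Classical fact: the Hankel determinants of the Catalan numbers are all 1.
catalan = [comb(2 * m, m) // (m + 1) for m in range(12)]
dets = [det(hankel(catalan, n)) for n in range(1, 6)]
assert dets == [1, 1, 1, 1, 1]
```

Positivity of all Hankel determinants is the standard criterion for a sequence to be a moment sequence, which is one route into the irrationality applications mentioned above.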
I will review the creation and development of the concept of number and the role of visualisation in that development. The relationship between innate human capabilities on the one hand and mathematical research and education on the other will be discussed.
We consider a problem of minimising $f_1(x)+f_2(y)$ over $x \in X \subseteq R^n$ and $y \in Y \subseteq R^m$ subject to a number of extra coupling constraints of the form $g_1(x) g_2(y) \geq 0$. Due to these constraints, the problem may have a large number of local minima. For any feasible combination of signs of $g_1(x)$ and $g_2(y)$, the coupled problem is decomposable, and the resulting two problems are assumed to be easily solved. An approach to solving the coupled problem is presented. We apply it to solving coupled monotonic regression problems arising in experimental psychology.
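The sign-enumeration idea can be sketched on a toy instance of my own devising: fixing the signs of $g_1(x)$ and $g_2(y)$ decouples the problem into two independent box-constrained subproblems (here one-dimensional clamped quadratics), and the global minimum is the best value over all feasible sign patterns:

```python
INF = float("inf")

def clamp(c, lo, hi):
    """Minimiser of (t - c)^2 over the interval [lo, hi]."""
    return min(max(c, lo), hi)

# Toy coupled problem: minimise (x-1)^2 + (y+1)^2 subject to x*y >= 0,
# i.e. g1(x) = x and g2(y) = y must share a sign.  Each feasible sign
# pattern yields a decoupled pair of easy subproblems.
patterns = [((0, INF), (0, INF)), ((-INF, 0), (-INF, 0))]
candidates = []
for (xlo, xhi), (ylo, yhi) in patterns:
    x, y = clamp(1, xlo, xhi), clamp(-1, ylo, yhi)
    candidates.append((x - 1) ** 2 + (y + 1) ** 2)
best = min(candidates)   # global minimum over all sign patterns
assert best == 1.0       # attained at (1, 0) and at (0, -1): two local minima
```

With many coupling constraints the number of sign patterns grows exponentially, which is exactly why the structured approach in the talk is needed rather than naive enumeration.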
A classical nonlinear PDE used for modelling heat transfer between concentric cylinders by fluid convection, and also for modelling porous flow, can be solved by hand using a low-order perturbation method. Extending this solution to higher order using computer algebra is surprisingly hard owing to exponential growth in the size of the series terms, naively computed. In the mid-1990s, so-called "Large Expression Management" tools were invented to allow construction and use of "computation sequences" or "straight-line programs" to extend the solution to 11th order. The cost of the method was O(N^8) in memory, high but not exponential.
Twenty years of doubling of computer power allows this method to get 15 terms. A new method, which reduces the memory cost to O(N^4), allows us to compute to N=30. At this order, singularities can reliably be detected using the quotient-difference algorithm. This allows confident investigation of the solutions, for different values of the Prandtl number.
This work is joint with Yiming Zhang (PhD Oct 2013).
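A sketch of the quotient-difference (qd) recurrence used for singularity detection, run in exact rational arithmetic on an illustrative series (not the convection series from the talk) whose singularities are known:

```python
from fractions import Fraction

def qd_table(coeffs, cols):
    """First `cols` q-columns of the quotient-difference scheme for a power
    series with the given coefficients, computed in exact rational arithmetic."""
    c = [Fraction(v) for v in coeffs]
    q = [c[n + 1] / c[n] for n in range(len(c) - 1)]   # q_1: coefficient ratios
    e = [Fraction(0)] * len(q)                         # e_0 = 0
    columns = [q]
    for _ in range(cols - 1):
        # rhombus rules: e_k = q_k^(n+1) - q_k^(n) + e_{k-1}^(n+1),
        #                q_{k+1} = q_k^(n+1) * e_k^(n+1) / e_k^(n)
        e = [q[n + 1] - q[n] + e[n + 1] for n in range(len(q) - 1)]
        q = [q[n + 1] * e[n + 1] / e[n] for n in range(len(e) - 1)]
        columns.append(q)
    return columns

# Series of 1/((1 - 2x)(1 - x)) has coefficients 2^(n+1) - 1; the q-columns
# converge to the reciprocals of the singularities, here 2 and 1.
cols = qd_table([2 ** (n + 1) - 1 for n in range(16)], 2)
q1, q2 = cols[0][-1], cols[1][-1]
assert abs(q1 - 2) < 0.01 and abs(q2 - 1) < 0.01
```

In floating point the deeper qd columns are notoriously ill-conditioned, which is why careful implementations (and, here, exact rationals) matter when locating singularities from series coefficients.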
Psychologists and other experiment designers are often faced with the task of creating sets of items to be used in factorial experiments. These sets need to be as similar as possible to each other in terms of the items' given attributes. We name this problem Picking Items for Experimental Sets (PIES). In this talk I will discuss how similarity can be defined, mixed integer programs to solve PIES and heuristic methods.
I will also examine the popular integer programming heuristic, the feasibility pump, which aims to find an integer feasible solution for a MIP. I will show how using different projection algorithms (including Douglas-Rachford), adding randomness, and reformulating the projection spaces change the effectiveness of the heuristic.
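As a toy illustration of the projection machinery (this is not the feasibility pump itself, and the instance is mine), Douglas-Rachford can be run on a small nonconvex feasibility problem: find an integer point on a line.

```python
def proj_line(p, a, b):
    """Project p onto the line {x : a.x = b}."""
    s = (b - a[0] * p[0] - a[1] * p[1]) / (a[0] ** 2 + a[1] ** 2)
    return (p[0] + s * a[0], p[1] + s * a[1])

def proj_lattice(p):
    """Project p onto the (nonconvex) integer lattice Z^2 by rounding."""
    return (round(p[0]), round(p[1]))

a, b = (1, 2), 5         # feasibility problem: integer x with x1 + 2*x2 = 5
x = (0.3, -0.2)          # arbitrary starting point
for _ in range(50):      # Douglas-Rachford: x <- x - P_A(x) + P_B(2 P_A(x) - x)
    pa = proj_lattice(x)
    r = (2 * pa[0] - x[0], 2 * pa[1] - x[1])
    pb = proj_line(r, a, b)
    x = (x[0] - pa[0] + pb[0], x[1] - pa[1] + pb[1])

s = proj_lattice(x)      # the "shadow" iterate carries the candidate solution
assert s[0] + 2 * s[1] == 5
```

Convergence is not guaranteed in the nonconvex setting, but empirically the method often finds a feasible point quickly, which is the behaviour the heuristic variants in the talk try to exploit.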
We develop an integer programming based decision support tool that quickly assesses the throughput of a coal export supply chain for a given level of demand. The tool can be used to rapidly evaluate a number of infrastructures for several future demand scenarios in order to identify a few that should be investigated more thoroughly using a detailed simulation model. To make the natural integer programming model computationally tractable, we exploit problem structure to reduce the number of variables and employ aggregation as well as disaggregation to strengthen the linear programming relaxation. Afterward, we implicitly reformulate the problem to exclude inherent symmetry in the original formulation and use Hall's marriage theorem to ensure its feasibility. Studying the polyhedral structure of a sub-problem, we enhance the formulation by generating strong valid inequalities. The integer programming tool is used in a computational study in which we analyze system performance for different levels of demand to identify potential bottlenecks.
In this talk, we provide some characterizations of ultramaximally monotone operators. We establish the Brezis-Haraux condition in the setting of a general Banach space. We also present some characterizations of reflexivity of a Banach space by a linear continuous ultramaximally monotone operator.
Joint work with Jon Borwein.
Times and Dates:
Mon 2 Dec 2013: 10-12, 2-4
Tue 3 Dec 2013: 10-12, 2-4
Wed 4 Dec 2013: 10-12, 2-4
Thu 5 Dec 2013: 10-12, 2-4
Abstract: This will be a short and fast introduction to the field of geometric group theory. Assumed knowledge is abstract algebra (groups and rings) and metric spaces. Topics to be covered include: free groups, presentations, quasi-isometry, hyperbolic groups, Dehn functions, growth, amenable groups, cogrowth, percolation, automatic groups, CAT(0) groups, and examples: Thompson's group F, self-similar groups (Grigorchuk group), Baumslag-Solitar groups.
This colloquium will explain some of the background and significance of the concept of amenability. Arguments with finite groups frequently, without remark, count the number of elements in a subset or average a function over the group. It is usually important in these arguments that the result of the calculation is invariant under translation. Such calculations cannot be so readily made in infinite groups but the concepts of amenability and translation invariant measure on a group in some ways take their place. The talk will explain this and also say how random walks relate to these same ideas.
An animation of the paradoxical decomposition is linked from the online abstract.
We propose and study a new method, called the Interior Epigraph Directions (IED) method, for solving constrained nonsmooth and nonconvex optimization. The IED method considers the dual problem induced by a generalized augmented Lagrangian duality scheme, and obtains the primal solution by generating a sequence of iterates in the interior of the dual epigraph. First, a deflected subgradient (DSG) direction is used to generate a linear approximation to the dual problem. Second, this linear approximation is solved using a Newton-like step. This Newton-like step is inspired by the Nonsmooth Feasible Directions Algorithm (NFDA), recently proposed by Freire and co-workers for solving unconstrained, nonsmooth convex problems. We have modified the NFDA so that it takes advantage of the special structure of the epigraph of the dual function. We prove that all the accumulation points of the primal sequence generated by the IED method are solutions of the original problem. We carry out numerical experiments by using test problems from the literature. In particular, we study several instances of the Kissing Number Problem, previously solved by various approaches such as an augmented penalty method, the DSG method, as well as the popular differentiable solvers ALBOX (a predecessor of ALGENCAN), Ipopt and LANCELOT. Our experiments show that the quality of the solutions obtained by the IED method is comparable with (and sometimes favourable over) those obtained by the other solvers mentioned.
Joint work with Wilhelm P. Freire and C. Yalcin Kaya.
In this talk I will describe an algorithm to do a random walk in the space of all words equal to the identity in a finitely presented group. We prove that the algorithm samples from a well defined distribution, and using the distribution we can find the expected value for the mean length of a trivial word. We then use this information to estimate the cogrowth of the group. We ran the algorithm on several examples -- where the cogrowth series is known exactly, our results are in agreement with the exact results. Running the algorithm on Thompson's group $F$, we see behaviour consistent with the hypothesis that $F$ is not amenable.
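The connection between trivial words and cogrowth can be illustrated exactly in a small amenable example (my own illustration, not the talk's random-walk sampler): in $Z^2 = \langle a, b \mid [a,b] \rangle$, a word in the generators is trivial precisely when it is a closed walk on the square lattice.

```python
from itertools import product

# Generators a, a^-1, b, b^-1 of Z^2 viewed as unit steps on the lattice.
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def trivial_words(length):
    """Count freely chosen words of the given length that equal the identity."""
    return sum(1 for w in product(STEPS, repeat=length)
               if sum(dx for dx, _ in w) == 0 and sum(dy for _, dy in w) == 0)

# Closed walks of length 2m on Z^2 number C(2m, m)^2: 4, 36, 400, ...
counts = [trivial_words(n) for n in (2, 4, 6)]
assert counts == [4, 36, 400]
# Grigorchuk's criterion: amenability corresponds to the growth rate of these
# counts approaching 2k = 4; at these short lengths it is still well below 4.
assert counts[-1] ** (1 / 6) < 4
```

For groups like Thompson's $F$ no such closed form exists, which is why the talk resorts to sampling trivial words instead of enumerating them.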
The scale function plays a key role in the structure theory of totally disconnected locally compact (t.d.l.c.) groups. Whereas the scale function is known to be continuous on a t.d.l.c. group, analysis of the continuity of the scale in a wider context requires topologising the group of continuous automorphisms. Existing topologies for Aut(G) are outlined and shown to be insufficient for guaranteeing the continuity of the scale function. Possible methods of generalising these topologies are explored.
In this talk I will discuss a method of finding simple groups acting on trees. I will discuss the theory behind this process and outline some proofs (time permitting).
A numerical method is proposed for constructing an approximation of the Pareto front of nonconvex multi-objective optimal control problems. First, a suitable scalarization technique is employed for the multi-objective optimal control problem. Then by using a grid of scalarization parameter values, i.e., a grid of weights, a sequence of single-objective optimal control problems are solved to obtain points which are spread over the Pareto front. The technique is illustrated on problems involving tumor anti-angiogenesis and a fed-batch bioreactor, which exhibit bang–bang, singular and boundary types of optimal control. We illustrate that the Bolza form, the traditional scalarization in optimal control, fails to represent all the compromise, i.e., Pareto optimal, solutions.
Joint work with Helmut Maurer.
C. Y. Kaya and H. Maurer, A numerical method for nonconvex multi-objective optimal control problems, Computational Optimization and Applications, (appeared online: September 2013, DOI 10.1007/s10589-013-9603-2)
Our goal is to estimate the rate of growth of a population governed by a simple stochastic model. We may choose $n$ sampling times at which to count the number of individuals present, but due to detection difficulties, or constraints on resources, we are able only to observe each individual with fixed probability $p$. We discuss the optimal sampling times at which to make our observations in order to approximately maximize the accuracy of our estimation. To achieve this, we maximize the expected amount of information obtained from such binomial observations, that is, the Fisher information. For a single sample, we derive an explicit form of the Fisher information. However, finding the Fisher information for higher values of $n$ appears intractable. Nonetheless, we find a very good approximation function for the Fisher information by exploiting the probabilistic properties of the underlying stochastic process and developing a new class of delayed distributions. Both numerical and theoretical results strongly support this approximation and confirm its high level of accuracy.
The split feasibility problem (SFP) consists in finding a point in a closed convex subset of a Hilbert space such that its image under a bounded linear operator belongs to a closed convex subset of another Hilbert space. Since its inception in 1994 by Censor and Elfving, it has received much attention thanks mainly to its applications to signal processing and image reconstruction. Iterative methods can be employed to solve the SFP. One of the most popular iterative methods is Byrne's CQ algorithm. However, this algorithm requires prior knowledge (or at least an estimate) of the norm of the bounded linear operator. We introduce a stepsize selection method so that the implementation of the CQ algorithm does not need any prior information regarding the operator norm. Furthermore, a relaxed CQ algorithm, where the two closed convex sets are both level sets of convex functions, and a Halpern-type algorithm are studied under the same stepsize rule, yielding both weak and strong convergence. A more general problem, the multiple-sets split feasibility problem, will also be presented. Numerical experiments are included to illustrate the applications to signal processing and, in particular, to compressed sensing and wavelet-based signal restoration.
Based on joint work with G. López and H.-K. Xu.
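The operator-norm-free stepsize idea can be sketched in one dimension. The following toy is illustrative only (the function names, parameter choices and the scalar setting are mine, not the authors'): $C$ and $Q$ are intervals, the "operator" is a scalar $a$, and the stepsize $\gamma_k = \rho\, f(x_k)/\|\nabla f(x_k)\|^2$ with $f(x) = \tfrac12\,\mathrm{dist}(ax, Q)^2$ never uses a bound on $|a|$.

```python
def proj_interval(x, lo, hi):
    """Projection onto the closed interval [lo, hi]."""
    return min(max(x, lo), hi)

def cq_scalar(a, C, Q, x0, iters=200, rho=1.0):
    """One-dimensional CQ iteration with a self-adaptive stepsize.

    Seeks x in C with a*x in Q, without using any bound on |a|:
    f(x) = 0.5*(a*x - P_Q(a*x))**2, grad f(x) = a*(a*x - P_Q(a*x)),
    gamma_k = rho * f(x_k) / grad_f(x_k)**2.
    """
    x = x0
    for _ in range(iters):
        ax = a * x
        r = ax - proj_interval(ax, *Q)        # residual a*x - P_Q(a*x)
        if r == 0:                            # a*x already lies in Q
            x = proj_interval(x, *C)
            if proj_interval(a * x, *Q) == a * x:
                return x                      # feasible: x in C and a*x in Q
            continue
        g = a * r                             # gradient of f at x
        gamma = rho * (0.5 * r * r) / (g * g)
        x = proj_interval(x - gamma * g, *C)
    return x

# Toy split feasibility problem: find x in C=[0,1] with 2x in Q=[1,2].
print(cq_scalar(2.0, (0.0, 1.0), (1.0, 2.0), x0=0.0))
```

On this instance the solution set is $[0.5,1]$, and the iterates approach $0.5$ geometrically even though the stepsize rule never sees the value $|a|=2$.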
How do a student’s attitude, learning behaviour and achievement in mathematics or statistics relate to each other and how do these change during the course of their undergraduate degree program? These are some of the questions I have been addressing in a longitudinal study that I have undertaken as part of my PhD research. The questions were addressed by soliciting comments from students several times during their undergraduate degree programs; through an initial attitude survey, course-specific surveys for up to two courses each semester and interviews with students near the end of their degrees. In this talk I will introduce you to the attitudes and learning behaviours of the mathematics students I followed through the three years of my research, and discuss their responses to the completed surveys (attitude and course-specific). To illuminate the general responses obtained from the surveys (1074 students completed the initial attitude survey and 645 course-specific surveys were completed), I will also introduce you to Tom, Paul, Kate and Ben, four students of varying degrees of achievement, who I interviewed near the end of their mathematics degrees.
We analyse local combinatorial structure in product sets of two subsets of a countable group which are "large" with respect to certain classes of (not necessarily invariant) means on the group. As an example of such a phenomenon, we mention the result of Bergelson, Furstenberg and Weiss which says that the sumset of two sets of positive density in the integers locally contains an almost-periodic set. In this theorem, the large sets are the sets of positive density, and the combinatorial structure is an almost periodic set.
The rough Cayley graph is the analogue in the context of topological groups of the standard Cayley graph, which is defined for finitely generated groups. It will be shown how one can associate such a graph to a compactly generated totally disconnected locally compact (t.d.l.c.) group, and how the rough Cayley graph represents an important tool for studying the structure of groups of this kind.
Let $G$ be a connected graph with vertex set $V$ and edge set $E$. The distance $d(u,v)$ between two vertices $u$ and $v$ in $G$ is the length of a shortest $u$-$v$ path in $G$. For an ordered set $W = \{w_1, w_2, \ldots, w_k\}$ of vertices and a vertex $v$ in a connected graph $G$, the code of $v$ with respect to $W$ is the $k$-vector \begin{equation} C_W(v)=(d(v,w_1),d(v,w_2), \ldots, d(v,w_k)). \end{equation} The set $W$ is a resolving set for $G$ if distinct vertices of $G$ have distinct codes with respect to $W$. A resolving set for $G$ containing a minimum number of vertices is called a minimum resolving set or a basis for $G$. The metric dimension, denoted $\dim(G)$, is the number of vertices in a basis for $G$. The problem of finding the metric dimension of an arbitrary graph is NP-complete.
Manuel et al. proved that the minimum metric dimension problem remains NP-complete even for bipartite graphs. The problem has been studied for trees, multi-dimensional grids, Petersen graphs, torus networks, Beneš and butterfly networks, honeycomb networks, X-trees and enhanced hypercubes.
These concepts have been extended in various ways and studied for different subjects in graph theory, including such diverse aspects as the partition of the vertex set, decomposition, orientation, domination, and coloring in graphs. Many invariants arising from the study of resolving sets in graph theory offer subjects for applicable research.
The theory of conditional resolvability has evolved by imposing conditions on the resolving set. This talk is to recall the concepts and mention the work done so far and future work.
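The definitions above can be made concrete with a short brute-force computation. This sketch is illustrative only (exhaustive search, feasible just for small graphs; function names are mine): it computes distances by BFS and searches for the smallest resolving set.

```python
from itertools import combinations

def distances(adj, src):
    """BFS distances from src in an unweighted graph given as {vertex: neighbours}."""
    dist = {src: 0}
    frontier = [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def metric_dimension(adj):
    """Smallest k such that some k-set W gives distinct codes C_W(v)."""
    verts = sorted(adj)
    d = {u: distances(adj, u) for u in verts}
    for k in range(1, len(verts)):
        for W in combinations(verts, k):
            codes = {tuple(d[w][v] for w in W) for v in verts}
            if len(codes) == len(verts):  # all codes distinct: W resolves G
                return k
    return len(verts) - 1  # fallback; reached only for the complete graph

cycle5 = {i: [(i + 1) % 5, (i - 1) % 5] for i in range(5)}
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(metric_dimension(cycle5), metric_dimension(path4))  # -> 2 1
```

The two printed values reflect the standard facts that cycles have metric dimension 2 while paths have metric dimension 1 (one endpoint resolves every vertex).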
Recently a great deal of attention from biologists has been directed to understanding the role of knots in perhaps the most famous of long polymers - DNA. In order for our cells to replicate, they must somehow untangle the approximately two metres of DNA that is packed into each nucleus. Biologists have shown that DNA of various organisms is non-trivially knotted with certain topologies preferred over others. The aim of our work is to determine the "natural" distribution of different knot-types in random closed curves and compare that to the distributions observed in DNA.
Our tool to understand this distribution is a canonical model of long chain polymers - self-avoiding polygons (SAPs). These are embeddings of simple closed curves into a regular lattice. The exact computation of the number of polygons of length $n$ and fixed knot type $K$ is extremely difficult - indeed the current best algorithms can barely touch the first knotted polygons. Instead of exact methods, in this talk I will describe an approximate enumeration method - which we call the GAS algorithm. This is a generalisation of the famous Rosenbluth method for simulating linear polymers. Using this algorithm we have uncovered strong evidence that the limiting distribution of different knot-types is universal. Our data shows that a long closed curve is about 28 times more likely to be a trefoil than a figure-eight, and that the natural distribution of knots is quite different from those found in DNA.
Popular accounts of evolution typically create an expectation that populations become ever better adapted over time, and some formal treatments of evolutionary processes suggest this too. However, such analyses do not highlight the fact that competition with conspecifics has negative population-level consequences too, particularly when individuals invest in success in zero-sum games. My own work is at the interface of theoretical biology and empirical data, and I will discuss several examples where an adaptive evolutionary process leads to something that appears silly from the population point of view, including a heightened risk of extinction in the Gouldian finch, reduced productivity of species in which males do not participate in parental care, and deterministic extinction of local populations in systems that feature sexual parasitism.
Extremal graph theory includes problems of determining the maximum number of edges in a graph on $n$ vertices that contains no forbidden subgraphs. We consider only simple graphs with no loops or multiple edges, and the forbidden subgraphs under consideration are cycles of length 3 and 4 (triangle and square). This problem was proposed by Erdős in 1975. Let $n$ denote the number of vertices in a graph $G$. By $ex(n; \{C_3,C_4\})$, or simply $ex(n;4)$, we mean the maximum number of edges in a graph of order $n$ and girth at least 5. There are only 33 exact values of $ex(n;4)$ currently known. In this talk, I give an overview of the current state of research in this problem, regarding the exact values, as well as the lower bound and the upper bound of the extremal numbers when the exact value is not known.
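For very small $n$ the extremal numbers can be verified directly. The following exhaustive search is illustrative only (it is nothing like the methods needed for the known larger values, and the function names are mine); it uses the fact that girth at least 5 means no pair of vertices has two common neighbours and no adjacent pair has any.

```python
from itertools import combinations

def girth_at_least_5(n, edges):
    """True iff the graph has no 3-cycle and no 4-cycle."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for u, v in combinations(range(n), 2):
        common = len(adj[u] & adj[v])
        if common > 1:                    # two common neighbours close a C4
            return False
        if common == 1 and v in adj[u]:   # adjacent with a common neighbour: a C3
            return False
    return True

def ex4(n):
    """ex(n;4): maximum edges of an n-vertex {C3,C4}-free graph, by brute force."""
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in range(1 << len(pairs)):
        edges = [p for i, p in enumerate(pairs) if mask >> i & 1]
        if len(edges) > best and girth_at_least_5(n, edges):
            best = len(edges)
    return best

print(ex4(5))  # -> 5 (attained by the 5-cycle)
```

Even this tiny search makes the difficulty plain: the number of candidate graphs grows as $2^{\binom{n}{2}}$, which is why so few exact values of $ex(n;4)$ are known.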
Many successful non-convex applications of the Douglas-Rachford method can be viewed as the reconstruction of a matrix, with known properties, from a subset of its entries. In this talk we discuss recent successful applications of the method to a variety of (real) matrix reconstruction problems, both convex and non-convex.
This is joint work with Fran Aragón and Matthew Tam.
I will report on recent joint work (with J.Y. Bello Cruz, H.M. Phan, and X. Wang) on the Douglas–Rachford algorithm for finding a point in the intersection of two subspaces. We prove that the method converges strongly to the projection of the starting point onto the intersection. Moreover, if the sum of the two subspaces is closed, then the convergence is linear with the rate being the cosine of the Friedrichs angle between the subspaces. Our results improve upon existing results in three ways: First, we identify the location of the limit and thus reveal the method as a best approximation algorithm; second, we quantify the rate of convergence, and third, we carry out our analysis in general (possibly infinite-dimensional) Hilbert space. We also provide various examples as well as a comparison with the classical method of alternating projections.
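The two-subspace result can be checked numerically in a tiny example. Two distinct lines through the origin in $\mathbb{R}^2$ intersect only at $0$, so the theorem predicts the Douglas-Rachford iterates converge to $P_{A\cap B}x_0 = 0$ at the linear rate $\cos\theta_F$, where $\theta_F$ is the angle between the lines. This sketch is my own illustration, not the authors' code:

```python
import math

def reflect(x, u):
    """Reflection of x across the line spanned by the unit vector u, in R^2."""
    d = x[0] * u[0] + x[1] * u[1]
    return (2 * d * u[0] - x[0], 2 * d * u[1] - x[1])

def dr_step(x, uA, uB):
    """One Douglas-Rachford step T = (I + R_B R_A)/2 for two lines."""
    y = reflect(reflect(x, uA), uB)
    return ((x[0] + y[0]) / 2, (x[1] + y[1]) / 2)

# Two lines through the origin at angle pi/6, so the Friedrichs angle is pi/6
# and the predicted linear rate is cos(pi/6) ~ 0.866 per step.
uA = (1.0, 0.0)
theta = math.pi / 6
uB = (math.cos(theta), math.sin(theta))
x = (3.0, 4.0)
for _ in range(60):
    x = dr_step(x, uA, uB)
print(math.hypot(x[0], x[1]) < 1e-2)  # -> True
```

After 60 steps the norm has shrunk by roughly $\cos(\pi/6)^{60}$, consistent with the stated rate; for two lines the DR operator is exactly a rotation composed with scaling by $\cos\theta_F$.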
Within a nonzero real Banach space we study the problem of characterising a maximal extension of a monotone operator in terms of minimality properties of representative functions that are bounded by the Penot and Fitzpatrick functions. We single out a property of the space of representative functions that enables a very compact treatment of maximality and pre-maximality issues. As this treatment does not assume reflexivity, and we characterise this property, the existence of a counterexample has a number of consequences for the search for a suitable certificate of maximality in non-reflexive spaces. In particular, one is led to conjecture that some extra side condition to the usual CQ is inevitable. We go on to look at the simplest such condition, namely boundedness of the domain of the monotone operator, and obtain some positive results.
TBA
There exist a variety of mechanisms to share indivisible goods between agents. One of the simplest is to let the agents take turns to pick an item. This mechanism is parameterized by a policy, the order in which agents take turns. A simple model of this mechanism was proposed by Bouveret and Lang in 2011. We show that in their setting the natural policy of letting the agents alternate in picking items is optimal. We also present a number of potential generalizations and extensions.
This is joint work with Nina Narodytska and Toby Walsh.
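The mechanism itself is simple to simulate. The following sketch is a minimal illustration (names are mine; it assumes strict preferences over a common item set and that each agent picks their most-preferred remaining item when their turn comes):

```python
def sequential_allocation(policy, prefs):
    """Simulate agents picking items in the order given by `policy`.

    policy: sequence of agent indices, e.g. [0, 1, 0, 1] for alternation.
    prefs:  prefs[a] is agent a's ranking of the items, best first.
    Returns the allocation as a dict agent -> list of picked items.
    """
    remaining = set(prefs[0])  # assumes all agents rank the same items
    allocation = {a: [] for a in range(len(prefs))}
    for agent in policy:
        pick = next(item for item in prefs[agent] if item in remaining)
        allocation[agent].append(pick)
        remaining.remove(pick)
    return allocation

# Two agents alternating over four items with different rankings:
prefs = [["a", "b", "c", "d"], ["b", "a", "d", "c"]]
print(sequential_allocation([0, 1, 0, 1], prefs))
# -> {0: ['a', 'c'], 1: ['b', 'd']}
```

Changing the policy (say, to [0, 1, 1, 0]) changes who gets what, which is exactly the design question the talk addresses: which policy is best on average.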
It was understood by Minkowski that one could prove interesting results in number theory by considering the geometry of lattices in $\mathbb{R}^n$. (A lattice is simply a grid of points.) This technique is called the "geometry of numbers". We now understand much more about analysis and dynamics on the space of all lattices, and this has led to a deeper understanding of classical questions. I will review some of these ideas, with emphasis on the dynamical aspects.
Joint work with N. Parikh, E. Chu, B. Peleato, and J. Eckstein
Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features, training examples, or both. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. We argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas-Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for $\ell_1$ problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, and support vector machines.
The related paper, code and talk slides are available at http://www.stanford.edu/~boyd/papers/admm_distr_stats.html.
It is well known that the Moore digraph, namely a diregular digraph of degree $d$, diameter $k$ and order $1 + d + d^2 + \cdots + d^k$, only exists if $d = 1$ or $k = 1$. A $(d,k)$-digraph is a diregular digraph of degree $d \geq 2$, diameter $k \geq 2$ and order $d + d^2 + \cdots + d^k$, one less than the Moore bound. Such a $(d,k)$-digraph is also called an almost Moore digraph.
The study of the existence of an almost Moore digraph of degree $d$ and diameter $k$ has received much attention. Fiol, Alegre and Yebra (1983) showed the existence of $(d,2)$-digraphs for all $d \geq 2$. In particular, for $d = 2$ and $k = 2$, Miller and Fris (1988) enumerated all non-isomorphic $(2,2)$-digraphs. Furthermore, Gimbert (2001) showed that there is only one $(d,2)$-digraph for $d \geq 3$. However, for degree 2 and diameter $k \geq 3$, it is known that there is no $(2,k)$-digraph (Miller and Fris, 1992). Furthermore, it was proved that there is no $(3,k)$-digraph with $k \geq 3$ (Baskoro, Miller, Širáň and Sutton, 2005). Recently, Conde, Gimbert, Gonzáles, Miret and Moreno (2008 & 2013) showed that no $(d,k)$-digraphs exist for $k = 3,4$ and any $d \geq 2$. Thus, the remaining open case is the existence of $(d,k)$-digraphs with $d \geq 4$ and $k \geq 5$.
Several necessary conditions for the existence of $(d,k)$-digraphs with $d \geq 4$ and $k \geq 5$ have been obtained. In this talk, we shall discuss some of these necessary conditions. Open problems related to this study are also presented.
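The orders in question are easy to tabulate; a two-line helper (illustrative only) makes the "one less than the Moore bound" relationship concrete:

```python
def moore_bound(d, k):
    """Moore bound for digraphs of out-degree d and diameter k: 1 + d + ... + d^k."""
    return sum(d ** i for i in range(k + 1))

def almost_moore_order(d, k):
    """Order of a (d,k)-digraph: d + d^2 + ... + d^k, one less than the Moore bound."""
    return moore_bound(d, k) - 1

print(almost_moore_order(2, 2))  # -> 6, the order of the (2,2)-digraphs
```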
I will discuss some models of what a "random abelian group" is, and some conjectures (the Cohen-Lenstra heuristics of the title) about how they show up in number theory. I'll then discuss the function field setting and a proof of these heuristics, with Ellenberg and Westerland. The proof is an example of a link between analytic number theory and certain classes of results in algebraic topology ("homological stability").
UPDATE: Abstract submission is now open.
The main thrust of this workshop will be exploring the interface between important methodological areas of infectious disease modelling. In particular, two main themes will be explored: the interface between model-based data analysis and model-based scenario analysis, and the relationship between agent-based/micro-simulation and modelling.
In many problems in control, optimal and robust control, one has to solve global optimization problems of the form $\mathbf{P}:f^\ast=\min_{\mathbf x}\{f(\mathbf x):\mathbf x\in\mathbf K\}$, or, equivalently, $f^\ast=\max\{\lambda:f-\lambda\geq0\text{ on }\mathbf K\}$, where $f$ is a polynomial (or even a semi-algebraic function) and $\mathbf K$ is a basic semi-algebraic set. One may even need to solve the "robust" version $\min\{f(\mathbf x):\mathbf x\in\mathbf K;h(\mathbf x,\mathbf u)\geq0,\forall \mathbf u\in\mathbf U\}$, where $\mathbf U$ is a set of parameters. For instance, some static output feedback problems can be cast as polynomial optimization problems whose feasible set $\mathbf K$ is defined by a polynomial matrix inequality (PMI), and robust stability regions of linear systems can be modeled as parametrized polynomial matrix inequalities (PMIs) where the parameters $\mathbf u$ account for uncertainties and the (decision) variables $\mathbf x$ are the controller coefficients.
Therefore, to solve such problems one needs tractable characterizations of polynomials (and even semi-algebraic functions) which are nonnegative on a set, a topic of independent interest and of primary importance because it also has implications in many other areas. We will review two kinds of tractable characterizations of polynomials which are nonnegative on a basic closed semi-algebraic set $\mathbf K\subset\mathbb R^n$. The first type of characterization applies when knowledge of $\mathbf K$ is through its defining polynomials, i.e., $\mathbf K=\{\mathbf x:g_j(\mathbf x)\geq 0, j =1,\dots, m\}$, in which case some powerful certificates of positivity can be stated in terms of sums-of-squares (SOS)-weighted representations. For instance, this allows one to define a hierarchy of semidefinite relaxations which yields a monotone sequence of lower bounds converging to $f^\ast$ (and in fact, finite convergence is generic). There is also another way of looking at nonnegativity, where now knowledge of $\mathbf K$ is through the moments of a measure whose support is $\mathbf K$. In this case, checking whether a polynomial is nonnegative on $\mathbf K$ reduces to solving a sequence of generalized eigenvalue problems associated with a countable (nested) family of real symmetric matrices of increasing size. When applied to $\mathbf P$, this results in a monotone sequence of upper bounds converging to the global minimum, which complements the previous sequence of lower bounds. These two (dual) characterizations provide convex inner (resp. outer) approximations (by spectrahedra) of the convex cone of polynomials nonnegative on $\mathbf K$.
Joint work with M. Mueller, B. O'Donoghue, and Y. Wang
We consider dynamic trading of a portfolio of assets in discrete periods over a finite time horizon, with arbitrary time-varying distribution of asset returns. The goal is to maximize the total expected revenue from the portfolio, while respecting constraints on the portfolio such as a required terminal portfolio and leverage and risk limits. The revenue takes into account the gross cash generated in trades, transaction costs, and costs associated with the positions, such as fees for holding short positions. Our model has the form of a stochastic control problem with linear dynamics and convex cost function and constraints. While this problem can be tractably solved in several special cases, such as when all costs are convex quadratic, or when there are no transaction costs, our focus is on the more general case, with nonquadratic cost terms and transaction costs.
We show how to use linear matrix inequality techniques and semidefinite programming to produce a quadratic bound on the value function, which in turn gives a bound on the optimal performance. This performance bound can be used to judge the performance obtained by any suboptimal policy. As a by-product of the performance bound computation, we obtain an approximate dynamic programming policy that requires the solution of a convex optimization problem, often a quadratic program, to determine the trades to carry out in each step. While we have no theoretical guarantee that the performance of our suboptimal policy is always near the performance bound (which would imply that it is nearly optimal) we observe that in numerical examples the two values are typically close.
20 minute presentation followed by 10 minutes of questions and discussion.
TBA
An exact bucket indexed (BI) mixed integer linear programming formulation for nonpreemptive single machine scheduling problems is presented that is a result of an ongoing investigation into strategies to model time in planning applications with greater efficacy. The BI model is a generalisation of the classical time indexed (TI) model to one in which at most two jobs can be processing in each time period. The planning horizon is divided into periods of equal length, but unlike the TI model, the length of a period is a parameter of the model and can be chosen to be as long as the processing time of the shortest job. The two models are equivalent if the problem data are integer and a period is of unit length, but when longer periods are used in the BI model, it can have significantly fewer variables and nonzeros than the TI model at the expense of a greater number of constraints. A computational study using weighted tardiness instances reveals the BI model significantly outperforms the TI model on instances where the mean processing time of the jobs is large and the range of processing times is small, that is, the processing times are clustered rather than dispersed.
Joint work with Natashia Boland and Riley Clement.
Random matrix theory has undergone significant theoretical progress in the last two decades, including proofs on universal behaviour of eigenvalues as the matrix dimension becomes large, and a deep connection between algebraic manipulations of random matrices and free probability theory. Underlying many of the analytical advances are tools from complex analysis. By developing numerical versions of these tools, it is now possible to calculate random matrix statistics to high accuracy, leading to new conjectures on the behaviour of random matrices. We overview recent advances in this direction.
The degree/diameter problem is to find the largest possible order of a graph (or digraph) with given maximum degree (or maximum out-degree) and given diameter. This is one of the unsolved problems in Extremal Graph Theory. Since the general problem is difficult, many variations of the problem have been considered, including bipartite, vertex-transitive, mixed and planar versions.
This talk is part of a series started in May. The provisional schedule is
I will talk about the metrical theory of Diophantine approximation associated with linear forms that are simultaneously small in terms of absolute value rather than the classical nearest integer norm. In other words, we consider linear forms which are simultaneously close to the origin. A complete Khintchine-Groshev type theorem for monotonic approximating functions is established within the absolute value setup. Furthermore, the Hausdorff measure generalization of the Khintchine-Groshev type theorem is obtained. As a consequence we obtain the complete Hausdorff dimension theory. Staying within the absolute value setup, we prove that the corresponding set of badly approximable vectors is of full dimension.
Joint work with David Wood (Monash University, Australia) and Eran Nevo (Ben-Gurion University of the Negev, Israel).
The maximum number of vertices of a graph of maximum degree $\Delta\ge 3$ and diameter $k\ge 2$ is upper bounded by $\Delta^{k}$. If we restrict our graphs to certain classes, better upper bounds are known. For instance, for the class of trees there is an upper bound of $2\Delta^{\lfloor k/2\rfloor}$. The main result of this paper is that, for large $\Delta$, graphs embedded in surfaces of bounded Euler genus $g$ behave like trees. Specifically, we show that, for large $\Delta$, such graphs have orders bounded from above by
\[\begin{cases} (c_0g+c_1)\Delta^{\lfloor k/2\rfloor} & \text{if $k$ is even}\\
(c_0g^2+c_1)\Delta^{\lfloor k/2\rfloor} & \text{if $k$ is odd}
\end{cases}\]
where $c_0,c_1$ are absolute constants.
With respect to lower bounds, we construct graphs of Euler genus $g$, odd diameter and orders $(c_0\sqrt{g}+c_1)\Delta^{\lfloor k/2\rfloor}$, for absolute constants $c_0,c_1$.
Our results answer in the negative a conjecture by Miller and Širáň (2005). Before this paper, there were constructions of graphs of Euler genus $g$ and orders $c_0\Delta^{\lfloor k/2\rfloor}$ for an absolute constant $c_0$. Also, Šiagiová and Simanjuntak (2004) provided an upper bound of $(c_0g+c_1)k\Delta^{\lfloor k/2\rfloor}$ with absolute constants $c_0,c_1$.
In his deathbed letter to G.H. Hardy, Ramanujan gave a vague definition of a mock modular function: at each root of unity its asymptotics match those of a modular form, though the choice of modular form may depend on the root of unity. Recently Folsom, Ono and Rhoades proved an elegant result about this match for a general family related to Dyson's rank (mock theta) function and the Andrews-Garvan crank (modular) function. In my talk I will outline some heuristics and elementary ingredients of the proof.
(Joint work with Konrad Engel and Martin Savelsbergh)
In an incremental network design problem we want to expand an existing network over several time periods, and we are interested in some quality measure for all the intermediate stages of the expansion process. In this talk, we look at the following simple variant: In each time period, we are allowed to add a single edge, the cost of a network is the weight of a minimum spanning tree, and the objective is to minimize the sum of the costs over all time periods. We describe a greedy algorithm for this problem and sketch a proof of the fact that it provides an optimal solution. We also indicate that incremental versions of other basic network optimization problems (shortest path and maximum flow) are NP-hard.
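One natural greedy rule for the MST variant is to add, in each period, the candidate edge that most reduces the current MST weight. The sketch below is my own illustration of such a rule (not necessarily the authors' exact algorithm), assuming the existing network already spans all vertices:

```python
def mst_weight(n, edges):
    """Weight of a minimum spanning tree via Kruskal (graph assumed connected).

    Edges are (weight, u, v) triples on vertices 0..n-1.
    """
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    total = 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

def greedy_incremental(n, existing, candidates):
    """Add one candidate edge per period, greedily minimising the MST weight.

    Returns the per-period MST weights; the objective is their sum.
    """
    edges = list(existing)
    pool = list(candidates)
    costs = []
    while pool:
        best = min(pool, key=lambda e: mst_weight(n, edges + [e]))
        pool.remove(best)
        edges.append(best)
        costs.append(mst_weight(n, edges))
    return costs

# A triangle: the cheap edge (1,0,2) is added first, since it helps the MST most.
print(greedy_incremental(3, [(5, 0, 1), (5, 1, 2)], [(2, 1, 2), (1, 0, 2)]))
# -> [6, 3]
```

The talk's result is that (for the MST objective) this kind of greedy choice is in fact optimal, which is notable given that the shortest-path and maximum-flow analogues are NP-hard.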
Image processing research is dominated, to a considerable degree, by linear-additive models of images. For example, wavelet decompositions are very popular both with experimentalists and theoreticians primarily because of their neatly convergent properties. Fourier and orthogonal series decompositions are also popular in applications, as well as playing an important part in the analysis of wavelet methods.
Multiplicative decomposition, on the other hand, has had very little use in image processing. In 1-D signal processing and communication theory it has played a vital part (amplitude, phase, and frequency modulations of communications theory especially).
In many cases 2-D multiplicative decompositions have just been too hard to formulate or expand. Insurmountable problems (divergences) often occur as the subtle consequences of unconscious errors in the choice of mathematical structure. In my work over the last 17 years I've seen how to overcome some of the problems in 2-D, and the concept of phase is a central, recurring theme. But there is still so much more to be done in 2-D and higher dimensions.
This talk will be a whirlwind tour of some main ideas and applications of phase in imaging.
Let spt(n) denote the number of smallest parts in the partitions of n. In 2008, Andrews found surprising congruences for the spt-function mod 5, 7 and 13. We discuss new congruences for spt(n) mod powers of 2. We give new generating function identities for the spt-function and Dyson's rank function. Recently with Andrews and Liang we found an spt-crank function that explains Andrews' spt-congruences mod 5 and 7. We extend these results by finding spt-cranks for various overpartition-spt-functions of Ahlgren, Bringmann, Lovejoy and Osburn. This most recent work is joint with Chris Jennings-Shaffer.
The aim of this Douglas-Rachford brainstorming session is to discuss:
-New applications and large scale experiments
-Diagnosing and profiling successful non-convex applications
-New conjectures
-Anything else you may think is relevant
Universities are facing a tumultuous time with external regulation through TEQSA and the rise of MOOCs (Massive Open Online Courses). Disciplines within universities face the challenge of doing research, as well as producing a range of graduates capable of undertaking diverse careers. These are not new challenges. The emergence of MOOCs has raised the question, 'Why go to a University?' These tumultuous times provide a threat as well as an opportunity. How do we balance our activities? Do teaching and learning need to be re-conceptualised? Is it time to seriously consider the role of education and the 'value-add' university education provides? This talk will provide snapshots of work that demonstrate the value-add universities do provide. Evidence is used to challenge current understandings and to chart a way forward.
The talk will be about new results on modular forms obtained by the speaker in collaboration with Shaun Cooper.
Do you ever wonder what goes on behind the closed doors of some of your professors? Or colleagues? What kind of stuff can I do for my Honours degree? Or my RHD studies? Well, let these wonders cease!
This sequence of talks will expose the greatest (mathematical) desires of mathematicians at Newcastle, highlighting several areas of current research from the purest of the pure to the most applicable of the applied. Talks will aim to be accessible to undergraduates (mostly), or anyone with a desire to learn more mathematics.
Programme: The feasibility problem associated with nonempty closed convex sets $A$ and $B$ is to find some $x\in A \cap B$. Projection algorithms in general aim to compute such a point. These algorithms play key roles in optimization and have many applications outside mathematics, for example in medical imaging. Until recently, convergence results were only available in the setting of linear spaces (more particularly, Hilbert spaces) where the two sets are closed and convex. The extension to geodesic metric spaces allows their use in spaces with no natural linear structure, as is the case, for instance, in tree spaces, state spaces, phylogenomics and configuration spaces for robotic movements.
After reviewing the pertinent aspects of CAT(0) spaces introduced in Part I, including results for von Neumann's alternating projection method, we will focus on the Douglas-Rachford algorithm in CAT(0) spaces. Two situations arise: spaces with constant curvature and those with non-constant curvature. A prototypical space of the latter kind will be introduced and the behaviour of the Douglas-Rachford algorithm within it examined.
Do you ever wonder what goes on behind the closed doors of some of your professors? Or colleagues? What kind of stuff can I do for my Honours degree? Or my RHD studies? Well, let these wonders cease!
This sequence of talks will expose the greatest (mathematical) desires of mathematicians at Newcastle, highlighting several areas of current research from the purest of the pure to the most applicable of the applied. Talks will aim to be accessible to undergraduates (mostly), or anyone with a desire to learn more mathematics.
Program: I will discuss symmetric criticality and the Mountain Pass Lemma. I will provide the needed background for anyone who did not come to Part 1.
This talk is available at http://carma.newcastle.edu.au/jon/symva-talk.pdf and the related paper is at http://carma.newcastle.edu.au/jon/symmetry.pdf. It has recently appeared in Advances in Nonlinear Analysis.
I will report on work I performed with Jim Zhu over the past three years on how to exploit different forms of symmetry in variational analysis. Various open problems will be flagged.
This talk is available at http://carma.newcastle.edu.au/jon/symva-talk.pdf and the related paper is at http://carma.newcastle.edu.au/jon/symmetry.pdf. It has recently appeared in Advances in Nonlinear Analysis.
This talk deals with problems that are asymptotically related to best-packing and best-covering. In particular, we discuss how to efficiently generate N points on a d-dimensional manifold that have the desirable qualities of well-separation and optimal order covering radius, while asymptotically having a prescribed distribution. Even for certain small numbers of points like N=5, optimal arrangements with regard to energy and polarization can be a challenging problem.
Yes! Finally there is some discrete maths in the high school curriculum! Well, perhaps.
In this talk I will go over the inclusion of discrete mathematics content in the new national curriculum, the existing plans for its implementation, what this will mean for high school teachers, and brainstorm ideas for helping out, if they need our help. I will also talk about "This is Megamathematics" and perhaps, if we have time, we can play a little bit with "Electracity".
The finite element method has become the most powerful approach in approximating solutions of partial differential equations arising in modern engineering and physical applications. We present some efficient finite element methods for Reissner-Mindlin, biharmonic and thin plate equations.
In the first part of the talk I present some applied partial differential equations and introduce the finite element method using the biharmonic equation. In the second part I will discuss the finite element method for the Reissner-Mindlin, biharmonic and thin plate spline equations in a unified framework.
I will explain how the probabilistic method can be used to obtain lower bounds for the Hadamard maximal determinant problem, and outline how the Lovász local lemma (Alon and Spencer, Corollary 5.1.2) can be used to improve the lower bounds.
This is a continuation of last semester's lectures on the probabilistic method, but is intended to be self-contained.
Overview of Course Content
The classical regularity theory is centred around the implicit and Lyusternik-Graves theorems, on the one hand, and the Sard theorem and transversality theory, on the other. The theory to be discussed in the course (and a number of its applications to various problems of variational analysis) deals with similar problems for non-differentiable and set-valued mappings. This theory grew out of demands arising (mainly) from optimization theory, and from the subsequent understanding that some key ideas of the classical theory can be naturally expressed in purely metric terms, without mention of any linear and/or differentiable structures.
Topics to be covered
The "theory" part of the course consists of five sections:
Formally, a basic knowledge of functional analysis, plus some acquaintance with convex analysis and nonlinear analysis in Banach spaces (e.g. Fréchet and Gâteaux derivatives, the implicit function theorem), will be sufficient for understanding the course. An understanding of the interplay between analytic and geometric concepts would be very helpful.
We show that a combination of two simple preprocessing steps generally improves the conditioning of a homogeneous system of linear inequalities. Our approach is based on a comparison among three different notions of condition numbers for linear inequalities.
The talk is based on joint work with Javier Peña and Negar Soheili (Carnegie Mellon University).
Roughly speaking, an automorphism $a$ of a graph $G$ is geometric if there is a drawing $D$ of $G$ such that $a$ induces a symmetry of $D$; if $D$ is planar then $a$ is planar. In this talk we discuss geometric and planar automorphisms. In particular we sketch a linear time algorithm for finding a planar drawing of a planar graph with maximum symmetry.
Complex (and Dynamical) Systems
A Data-Based View of Our World
Population censuses and the human face of Australia
Scientific Data Mining
Earth System Modeling
Mitigating Natural Disaster Risk
Sustainability – Environmental modelling
BioInvasion and BioSecurity
Realising Our Subsurface Potential
Abstract submission closes 31st May, 2013.
For more information, visit the conference website.
In a recent referee report, the referee said he/she could not understand the proofs of either of the two main results. Come and judge for yourself! This is joint work with Darryn Bryant and Don Kreher.
Geodesic metric spaces provide a setting in which we can develop much of nonlinear, and in particular convex, analysis in the absence of any natural linear structure. For instance, in a state space it often makes sense to speak of the distance between two states, or even a chain of connecting intermediate states, whereas the addition of two states makes no sense at all.
We will survey the basic theory of geodesic metric spaces, and in particular Gromov's so called CAT($\kappa$) spaces. And if there is time (otherwise in a later talk), we will examine some recent results concerning alternating projection type methods, principally the Douglas--Rachford algorithm, for solving the two set feasibility problem in such spaces.
Given a set T of the Euclidean space, whose elements are called sites, and a particular site s, the Voronoi cell of s is the set formed by all points closer to s than to any other site. The Voronoi diagram of T is the family of Voronoi cells of all the elements of T. In this talk we show some applications of the Voronoi diagrams of finite and infinite sets and analyze direct and inverse problems concerning the cells. We also discuss the stability of the cells under different types of perturbations and the effect of assigning weights to the sites.
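As a small illustration of the definition (the sites and query points below are an arbitrary choice for illustration, not from the talk), a point belongs to the Voronoi cell of whichever site is nearest:

```python
# Sketch: assigning query points to Voronoi cells of a finite site set.
import math

sites = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]

def nearest_site(p, sites):
    """Return the index of the site whose Voronoi cell contains p."""
    return min(range(len(sites)),
               key=lambda i: math.dist(p, sites[i]))

# The Voronoi cell of site 0 is the set of points closer to (0,0)
# than to either of the other two sites.
assert nearest_site((-1.0, -1.0), sites) == 0
assert nearest_site((2.1, 0.2), sites) == 1
assert nearest_site((1.0, 3.0), sites) == 2
```

The weighted variants discussed in the talk replace the Euclidean distance here by a weighted distance, which can change the cells dramatically.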
You are invited to a celebration of the 21st anniversary of the Factoring Lemma. This lemma was the key to solving some long-standing open problems, and was the starting point of an investigation of totally disconnected, locally compact groups that has ensued over the last 20 years. In this talk, the life of the lemma will be described from its conception through to a very recent strengthening of it. It will be described at a technical level, as well as viewed through its relationships with topology, geometry, combinatorics, algebra, linear algebra and research grants.
A birthday cake will be served afterwards.
Please make donations to the Mathematics Prize Fund in lieu of gifts.
In trajectory optimization, the optimal path of a flight system or a group of flight systems is sought, often in an interplanetary setting: we are in search of trajectories for one or more spacecraft. On the one hand, this is a well-developed field of research, in which commercial software packages are already available for various scenarios. On the other hand, the computation of such trajectories can be rather demanding, especially when low-thrust missions with long travel times (e.g., years) are considered. Such missions invariably involve gravitational slingshot maneuvers at various celestial bodies in order to save propellant or time, and these maneuvers involve vastly different time scales: years of coasting can be followed by course corrections on a daily basis. In this talk, we give an overview of trajectory optimization for space vehicles and highlight some recent algorithmic developments.
Presenters: Judy-anne Osborn, Ben Brawn, Mick Gladys.
Eric Mazur is a Harvard physicist who has become known for the strategies he introduced for teaching large first-year service (physics) classes in a way that seems to improve students' conceptual understanding of the material while not hurting their exam performance. The ideas are implemented using clicker-like technology (Mick Gladys will talk about his own implementation using mobile phones) as well as lower-tech card-based analogues. We will screen a YouTube video in which Professor Mazur explains his ideas, and then describe how we have adapted some of them in maths and physics.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
W. T. Tutte published a paper in 1963 entitled "How to Draw a Graph". Tutte's motivation was mathematical, and his paper can be seen as a contribution to the long tradition of geometric representations of combinatorial objects.
Over the following 40-odd years, the motivation for creating visual representations of graphs has changed from mathematical curiosity to visual analytics. Current demand for graph drawing methods is now high, because of the potential for more human-comprehensible visual forms in industries as diverse as biotechnology, homeland security and sensor networks. Many new methods have been proposed, tested, implemented, and found their way into commercial tools. This paper describes two strands of this history: the force directed approach, and the planarity approach. Both approaches originate in Tutte's paper.
Further, we demonstrate a number of methods for graph visualization that can be derived from the weighted version of Tutte's method. These include results on clustered planar graphs, edge-disjoint paths, an animation method, interactions such as adding/deleting vertices/edges, and a focus-plus-context view method.
In this talk, we study the rate of convergence of the cyclic projection algorithm applied to finitely many semi-algebraic convex sets. We establish an explicit convergence rate estimate which relies on the maximum degree of the polynomials that generate the semi-algebraic convex sets and the dimension of the underlying space. We achieve our results by exploiting the algebraic structure of the semi-algebraic convex sets.
This is joint work with Jon Borwein and Guoyin Li.
The degree/diameter problem in graph theory is a theoretical problem with applications in network design. The problem is to find the maximum possible number of nodes in a network, subject to limitations on the number of links attached to any node and on the largest number of links that must be traversed when a message is sent from one node to another inside the network. An upper bound for this problem, known as the Moore bound, is given. The graphs that attain the bound are called Moore graphs.
In this talk we give an overview of the existing Moore graphs and we discuss the existence of a Moore graph of degree 57 with diameter 2 which has been an open problem for more than 50 years.
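For concreteness, the Moore bound for maximum degree $d$ and diameter $k$ is $M(d,k) = 1 + d\sum_{i=0}^{k-1}(d-1)^i$; a few lines of Python confirm the familiar diameter-2 values, including 3250 vertices for the open degree-57 case mentioned above:

```python
# Quick check of the Moore bound M(d, k) = 1 + d * sum_{i=0}^{k-1} (d-1)^i:
# count a root vertex, its d neighbours, and at most (d-1)-fold branching below.

def moore_bound(d, k):
    return 1 + d * sum((d - 1) ** i for i in range(k))

# Diameter-2 Moore graphs: the pentagon (d=2), the Petersen graph (d=3),
# the Hoffman-Singleton graph (d=7), and the open case d=57.
assert moore_bound(2, 2) == 5
assert moore_bound(3, 2) == 10
assert moore_bound(7, 2) == 50
assert moore_bound(57, 2) == 3250
```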
Network infrastructures are a common phenomenon. Network upgrades and expansions typically occur over time due to budget constraints. We introduce a class of incremental network design problems that allow investigation of many of the key issues related to the choice and timing of infrastructure expansions and their impact on the costs of the activities performed on that infrastructure. We examine three variants: incremental network design with shortest paths, incremental network design with maximum flows, and incremental network design with minimum spanning trees. We investigate their computational complexity, analyse the performance of natural heuristics, derive approximation algorithms, and study integer programming formulations.
Our most recent computations tell us that any counterexample to Giuga’s 1950 primality conjecture must have at least 19,907 digits. Equivalently, any number which is both a Giuga and a Carmichael number must have at least 19,907 digits. This bound has not been achieved through exhaustive testing of all numbers with up to 19,907 digits, but rather through exploitation of the properties of Giuga and Carmichael numbers. We introduce the conjecture and an algorithm for finding lower bounds to a counterexample, then present our recent results and discuss challenges to further computation.
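Giuga's criterion itself is easy to test for small $n$: the conjecture states that $n > 1$ is prime exactly when $\sum_{i=1}^{n-1} i^{n-1} \equiv -1 \pmod{n}$. A naive check (nothing like the paper's large-scale computation, just an illustration of the statement):

```python
# Giuga's 1950 conjecture: n > 1 is prime iff sum_{i=1}^{n-1} i^(n-1) = -1 (mod n).

def giuga_sum(n):
    """Compute sum_{i=1}^{n-1} i^(n-1) mod n."""
    return sum(pow(i, n - 1, n) for i in range(1, n)) % n

def satisfies_giuga(n):
    return giuga_sum(n) == n - 1   # i.e. congruent to -1 mod n

passing = [n for n in range(2, 200) if satisfies_giuga(n)]
# Every prime passes (by Fermat's little theorem); the conjecture is that
# nothing else does, and below 200 only the 46 primes appear.
assert all(all(p % d for d in range(2, p)) for p in passing)
assert len(passing) == 46
```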
We continue on the Probabilistic Method, looking at Chapter 4 of Alon and Spencer. We will consider the second moment method, Chebyshev's inequality, Markov's inequality and Chernoff's inequality.
We will discuss the substantial mathematical, computational, historical and philosophical aspects of this celebrated and controversial theorem. Much of this talk should be accessible to undergraduates, but we will also discuss some of the crucial details of the actual revision by Robertson, Sanders, Seymour and Thomas of the original Appel-Haken computer proof. We will additionally cover recent new computer proofs by Gonthier, and by Steinberger, and also the generalisations of the theorem by Hajós and Hadwiger which are currently still open. New software developed by the speaker will be used to visually illustrate many of the subtle points involved, and we will examine the air of controversy that still surrounds existing computer proofs. Finally, the prospect of a human proof will be canvassed.
ABOUT THE SPEAKER: Mr Michael Reynolds has a Masters degree in Mathematics and extensive experience in the software industry. He is currently doing his PhD in graph theory at the University of Newcastle.
In response to a recent report from Australia's Chief Scientist (Prof Ian Chubb), the Australian government recently sought applications from consortia of universities (and other interested parties) interested in developing pre-service programs that will improve the quality of mathematics and science school teachers. In particular, the programs should:
At UoN, a group of us from Education and MAPS produced the outline of a vision for our own BTeach/BMath program which builds on local strengths. In the context of very tight timelines, this became a part of an application together with five other universities. In this seminar we will outline the vision that we produced, and invite further contributions and participation, with a view to improving the BMath/BTeach program regardless of the outcome of the application of which we are a part.
In this talk we introduce a Douglas-Rachford inspired projection algorithm: the cyclic Douglas-Rachford iteration scheme. We show that, unlike the classical Douglas-Rachford scheme, the method can be applied directly to convex feasibility problems in Hilbert space without recourse to a product space formulation. Initial results from numerical experiments comparing our methods to the classical Douglas-Rachford scheme are promising.
This is joint work with Prof. Jonathan Borwein.
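For readers unfamiliar with the underlying two-set Douglas-Rachford step $T = \tfrac{1}{2}(I + R_B R_A)$, with reflections $R = 2P - I$, here is a minimal Euclidean sketch; the two lines used below are an illustrative choice, not an example from the talk, and the cyclic scheme chains such two-set steps around many sets:

```python
# Douglas-Rachford for two lines in the plane: A = {y = 0}, B = {x = y}.
# Their intersection is the origin, which the "shadow" sequence P_A(x_n) finds.

def proj_axis(p):          # projection onto the x-axis {y = 0}
    return (p[0], 0.0)

def proj_diag(p):          # projection onto the diagonal {x = y}
    m = (p[0] + p[1]) / 2
    return (m, m)

def reflect(proj, p):      # reflection R = 2P - I
    q = proj(p)
    return (2 * q[0] - p[0], 2 * q[1] - p[1])

def dr_step(p):            # T = (I + R_B R_A) / 2
    r = reflect(proj_diag, reflect(proj_axis, p))
    return ((p[0] + r[0]) / 2, (p[1] + r[1]) / 2)

p = (3.0, 4.0)
for _ in range(60):
    p = dr_step(p)

shadow = proj_axis(p)      # converges to the feasible point (0, 0)
assert abs(shadow[0]) < 1e-6 and abs(shadow[1]) < 1e-6
```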
Spatial patterns of events that occur on a network of lines, such as traffic accidents recorded on a street network, present many challenges to a statistician. How do we know whether a particular stretch of road is a "black spot", with a higher-than-average risk of accidents? How do we know which aspects of road design affect accident risk? These important questions cannot be answered satisfactorily using current techniques for spatial analysis. The core problem is that we need to take account of the geometry of the road network. Standard methods for spatial analysis assume that "space" is homogeneous; they are inappropriate for point patterns on a linear network, and give fallacious results. To make progress, we must abandon some of the most cherished assumptions of spatial statistics, with far-reaching implications for statistical methodology.
The talk will describe the first few steps towards a new methodology for analysing point patterns on a linear network. Ingredients include stochastic processes, discrete graph theory and classical partial differential equations as well as statistical methodology. Examples come from ecology, criminology and neuroscience.
Graph automatic groups are an extension of the notion of an automatic group, introduced by Kharlampovich, Khoussainov and Miasnikov in 2011 with the intention of capturing a wider class of groups while preserving computational properties such as having quadratic-time word problem. We extend the notion further by replacing regular languages with more general language classes. We prove that nonsolvable Baumslag-Solitar groups are (context free)-graph automatic, that (context sensitive)-graph automatic implies a context-sensitive word problem, and that conversely groups with context-sensitive word problem are (context sensitive)-graph automatic. Finally, an obstruction to (context sensitive)-graph automatic implying polynomial-time word problem is given.
This is joint work with Jennifer Taback, Bowdoin College.
Vulnerability measures the resistance of a network to disruptions in its links or nodes. Since any network can be modelled by a graph, many vulnerability measures have been defined to assess the resistance of networks. Vulnerability measures such as connectivity, integrity and toughness have been studied widely over all vertices of a graph. Recently, researchers have begun to study vulnerability measures on graphs over vertices or edges which have a special property, rather than over all vertices of the graph.
Independent domination, connected domination and total domination are examples of such measures. The total accessibility number of a graph is defined as a new measure by choosing accessible sets $S \subset V$ which have the special property of accessibility. The total accessibility number of a graph G is based on the accessibility number of the graph, where the subsets S are its accessible sets. The accessibility number of a connected graph G is a concept based on the neighbourhood relation between any two vertices, using a third vertex connected to both of them.
We introduce and study a new dual condition which characterizes zero duality gap in nonsmooth convex optimization. We prove that our condition is weaker than all existing constraint qualifications, including the closed epigraph condition. Our dual condition was inspired by, and is weaker than, the so-called Bertsekas’ condition for monotropic programming problems. We give several corollaries of our result and special cases as applications. We pay special attention to the polyhedral and sublinear cases, and their implications in convex optimization.
This research is a joint work with Jonathan M. Borwein and Liangjin Yao.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
The classical prolate spheroidal wavefunctions (prolates) arise when solving the Helmholtz equation by separation of variables in prolate spheroidal coordinates. They interpolate between Legendre polynomials and Hermite functions. In a beautiful series of papers published in the Bell Labs Technical Journal in the 1960's, they were rediscovered by Landau, Slepian and Pollak in connection with the spectral concentration problem. After years spent out of the limelight while wavelets drew the focus of mathematicians, physicists and electrical engineers, the popularity of the prolates has recently surged through their appearance in certain communication technologies. In this talk we outline some developments in the sampling theory of bandlimited signals that employ the prolates, and the construction of bandpass prolate functions.
This is joint work with Joe Lakey (New Mexico State University)
Modern mathematics suffers from subtle but serious logical problems connected with the widespread use of infinite sets and the non-computational aspects of real numbers. The result is an ever-widening gap between the theories of pure mathematics and the computations available to computer scientists.
In this talk we discuss a new approach to mathematics that aims to remove many of the logical difficulties by returning our focus to the all-important rational numbers and polynomial arithmetic. The key is rational trigonometry, which shows how to rethink the fundamentals of trigonometry and metrical geometry in a purely algebraic way, opens the door to more general non-Euclidean geometries, and has numerous concrete applications for computer scientists interested in graphics and robotics.
I will survey what is known and some of the open questions.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html
I will survey what is known and some of the open questions.
We discuss some recently discovered relations between L-values of modular forms and integrals involving the complete elliptic integral K. Gentle and illustrative examples will be given. Such relations also lead to closed forms of previously intractable integrals and (chemical) lattice sums.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
I will survey what is known and some of the open questions.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html
I will survey what is known and some of the open questions.
Reaction-diffusion processes occur in many materials with microstructure such as biological cells, steel or concrete. The main difficulty in modelling and simulating accurately such processes is to account for the fine microstructure of the material. One method of upscaling multi-scale problems, which has proven reliable for obtaining feasible macroscopic models, is the method of periodic homogenisation.
The talk will give an introduction to multi-scale modelling of chemical mechanisms in domains with microstructure as well as to the method of periodic homogenisation. Moreover, a few aspects of solving the resulting systems of equations numerically will also be discussed.
I am grateful to have been appointed in a role with a particular focus on First Year Teaching as well as a research mandate. The prospect of trying to do both well is daunting but exciting. I have begun talking with some of my colleagues who are in somewhat similar roles in other Universities in Australia and overseas about what they do. I would like to share what I've learnt, as well as some of my thoughts so far about how this new role might evolve. I am also very interested in input from the Maths discipline or indeed any of my colleagues as to what you think is important and how this role can benefit the maths discipline and our school.
The desire to understand $\pi$, the challenge, and originally the need, to calculate ever more accurate values of $\pi$, the ratio of the circumference of a circle to its diameter, has captured mathematicians - great and less great - for many many centuries. And, especially recently, $\pi$ has provided compelling examples of computational mathematics. $\pi$, uniquely in mathematics, is pervasive in popular culture and the popular imagination. In this lecture I shall intersperse a largely chronological account of $\pi$'s mathematical and numerical status with examples of its ubiquity. It is truly a number for Planet Earth.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
We prove that if $q\ne0,\pm1$ and $\ell\ge1$ are fixed integers, then the numbers $$ 1, \quad \sum_{n=1}^\infty\frac{1}{q^n-1}, \quad \sum_{n=1}^\infty\frac{1}{q^{n^2}-1}, \quad \dots, \quad \sum_{n=1}^\infty\frac{1}{q^{n^\ell}-1} $$ are linearly independent over $\mathbb{Q}$. This generalizes a result of Erdős, who treated the case $\ell=1$. The method is based on the original approaches of Chowla and Erdős, together with some results about primes in arithmetic progressions with large moduli due to Alford, Granville and Pomerance.
This is joint work with Yohei Tachiya.
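The series above are easy to explore numerically. For $q = 2$ and $\ell = 1$ the sum is the Erdős–Borwein constant; a quick sketch with exact rational partial sums (purely illustrative):

```python
# Partial sums of sum_{n>=1} 1/(q^(n^l) - 1) as exact rationals.
from fractions import Fraction

def partial_sum(q, l, terms):
    return sum(Fraction(1, q ** (n ** l) - 1) for n in range(1, terms + 1))

s1 = float(partial_sum(2, 1, 40))   # Erdos-Borwein constant, ~1.60669...
s2 = float(partial_sum(2, 2, 10))   # the l = 2 series converges much faster

assert abs(s1 - 1.6066951524) < 1e-9
assert 1.0 < s2 < 1.4
```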
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html
In 1997, Kaneko introduced the poly-Bernoulli numbers. Poly-Euler numbers are introduced as a generalization of the Euler numbers in a manner similar to the introduction of the poly-Bernoulli numbers. In my talk, some properties of poly-Euler numbers, for example explicit formulas, sign change, a Clausen-von Staudt type formula and combinatorial interpretations, are shown.
This research is joint work with Yasuo Ohno.
The joint spectral radius of a finite set of real $d \times d$ matrices is defined to be the maximum possible exponential rate of growth of long products of matrices drawn from that set. A set of matrices is said to have the finiteness property if there exists a periodic product which achieves this maximal rate of growth. J. C. Lagarias and Y. Wang conjectured in 1995 that every finite set of real $d \times d$ matrices satisfies the finiteness property. However, T. Bousch and J. Mairesse proved in 2002 that counterexamples to the finiteness conjecture exist, showing in particular that there exists a family of pairs of $2 \times 2$ matrices which contains a counterexample. Similar results were subsequently given by V. D. Blondel, J. Theys and A. A. Vladimirov and by V. S. Kozyakin, but no explicit counterexample to the finiteness conjecture was given. This talk will discuss an explicit counterexample to this conjecture.
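The lower-bound side of the definition is easy to compute: every product $P$ of $n$ matrices from the set gives $\rho(P)^{1/n} \le$ JSR. A brute-force sketch using a standard pair for which the value is known to be the golden ratio, attained by the periodic word $A_0A_1$ (so this pair satisfies the finiteness property, unlike the counterexample of the talk):

```python
# Brute-force lower bound for the joint spectral radius of a set of
# 2x2 matrices: max over all length-n products of rho(product)^(1/n).
from itertools import product as all_words

A0 = ((1, 1), (0, 1))
A1 = ((1, 0), (1, 1))

def matmul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def spectral_radius(M):
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = max(tr * tr - 4 * det, 0) ** 0.5
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

def jsr_lower_bound(mats, n):
    best = 0.0
    for w in all_words(mats, repeat=n):
        P = w[0]
        for M in w[1:]:
            P = matmul(P, M)
        best = max(best, spectral_radius(P) ** (1.0 / n))
    return best

phi = (1 + 5 ** 0.5) / 2
# The word A0*A1 has spectral radius phi^2, so length-2 products already
# certify the lower bound phi.
assert abs(jsr_lower_bound([A0, A1], 2) - phi) < 1e-12
```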
After Gromov's work in the 1980s, the modern approach to studying infinite groups is from the geometric point of view, seeing them as metric spaces and using geometric concepts. One of these is the concept of distortion of a subgroup in a group. Here we will give the definition and some examples of distorted and nondistorted subgroups, and some recent results on them. The main tools used to establish these results are quasi-metrics or metric estimates: quantities which differ from the distance by a multiplicative constant, but which still capture the concept well enough to understand distortion.
Three ideas --- active sets, steepest descent, and smooth approximations of functions --- permeate nonsmooth optimization. I will give a fresh perspective on these concepts, and illustrate how many results in these areas can be strengthened in the semi-algebraic setting. This is joint work with A.D. Ioffe (Technion), A.S. Lewis (Cornell), and M. Larsson (EPFL).
Let $s_q(n)$ be the sum of the $q$-ary digits of $n$. For example, $s_{10}(1729) = 1 + 7 + 2 + 9 = 19$. It is known what $s_q(n)$ looks like "on average", and it can be shown that $s_q(n^h)$ looks $h$ times bigger "on average". This raises the question: is the ratio of these two things $h$ on average? In this talk we will give some history of the sum-of-digits function, and a proof of one of Stolarsky's conjectures concerning the minimal values of the ratio of $s_q(n)$ and $s_q(n^h)$.
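As a quick numerical illustration of the "on average" statements (base 10, $h = 2$, a small range; illustrative only, the asymptotic ratio 2 is only approached slowly):

```python
# s_q(n) is the sum of the q-ary digits of n.

def digit_sum(n, q=10):
    s = 0
    while n:
        s += n % q
        n //= q
    return s

assert digit_sum(1729) == 1 + 7 + 2 + 9

# On average s_10(n^2) is roughly twice s_10(n); over n < 10^5 the
# empirical ratio is already close to 2.
N = 10 ** 5
avg1 = sum(digit_sum(n) for n in range(1, N)) / N
avg2 = sum(digit_sum(n ** 2) for n in range(1, N)) / N
assert 1.7 < avg2 / avg1 < 2.2
```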
Fundamental questions in basic and applied ecology alike involve complex adaptive systems, in which localized interactions among individual agents give rise to emergent patterns that feed back to affect individual behavior. In such systems, a central challenge is to scale from the "microscopic" to the "macroscopic", in order to understand the emergence of collective phenomena, the potential for critical transitions, and the ecological and evolutionary conflicts between levels of organization. This lecture will explore some specific examples, from universality in bacterial pattern formation to collective motion and collective decision-making in animal groups. It also will suggest that studies of emergence, scaling and critical transitions in physical systems can inform the analysis of similar phenomena in ecological systems, while raising new challenges for theory.
Professor Levin received his B.A. from Johns Hopkins University and his Ph.D. in mathematics from the University of Maryland. At Cornell University (1965-1992), he was Chair of the Section of Ecology and Systematics, then Director of the Ecosystems Research Center, the Center for Environmental Research and the Program on Theoretical and Computational Biology, as well as Charles A. Alexander Professor of Biological Sciences (1985-1992). Since 1992, he has been at Princeton University, where he is currently George M. Moffett Professor of Biology and Director of the Center for BioComplexity. He retains an Adjunct Professorship at Cornell.
His research interests are in understanding how macroscopic patterns and processes are maintained at the level of ecosystems and the biosphere, in terms of ecological and evolutionary mechanisms that operate primarily at the level of organisms; in infectious diseases; and in the interface between basic and applied ecology.
Simon Levin visits Australia for the first in the Maths of Planet Earth Simons Public Lecture Series. http://mathsofplanetearth.org.au/events/simons/
Automaton semigroups are a natural generalisation of the automaton groups introduced by Grigorchuk and others in the 1980s as examples of groups having various 'exotic' properties. In this talk I will give a brief introduction to automaton semigroups, and then discuss recent joint work with Alan Cain on the extent to which the class of automaton semigroups is closed under certain semigroup constructions (free products and wreath products).
Many problems in diverse areas of mathematics and modern physical sciences can be formulated as a Convex Feasibility Problem, consisting of finding a point in the intersection of finitely many closed convex sets. Two other related problems are the Split Feasibility Problem and the Multiple-Sets Split Feasibility Problem, both very useful when solving inverse problems where constraints are imposed in the domain as well as in the range of a linear operator. We present some recent contributions concerning these problems in the setting of Hilbert spaces along with some numerical experiments to illustrate the implementation of some iterative methods in signal processing.
Motivated by laboratory studies on the distribution of brain synapses, the classical theory of box integrals - being expectations on unit hypercubes - is extended to a new class of fractal "string-generated Cantor sets" that facilitate fine-tuning of their fractal dimension through a suitable choice of generating string. Closed forms for certain statistical moments on these fractal sets will be presented, together with a precision algorithm for higher embedding dimensions. This is based on joint work with Laur. Prof. Jon Borwein, Prof. David Bailey and Dr. Richard Crandall.
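For orientation, a classical box integral is the expected distance from a random point of the unit $d$-cube to the origin; the talk's fractal string-generated Cantor sets refine exactly this kind of expectation. A Monte Carlo sketch of the classical cube case (the sample sizes and tolerances are arbitrary choices):

```python
# Monte Carlo estimate of the box integral B(1) = E|x| over the unit d-cube.
import random

def box_integral(d, samples=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += sum(rng.random() ** 2 for _ in range(d)) ** 0.5
    return total / samples

# Known closed forms: d = 1 gives 1/2; d = 2 gives (sqrt(2) + asinh(1))/3,
# approximately 0.76520.
assert abs(box_integral(1) - 0.5) < 0.01
assert abs(box_integral(2) - 0.7652) < 0.01
```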
Parameterised approximation is a relatively new but growing field of interest. It merges two ways of dealing with NP-hard optimisation problems, namely polynomial approximation and exact parameterised (exponential-time) algorithms.
We explore opportunities for parameterising constant-factor approximation algorithms for vertex cover, and we provide a simple algorithm that achieves any approximation ratio of the form $\frac{2l+1}{l+1}$, $l=1,2,\dots$, with complexity that outperforms previously published algorithms by Bourgeois et al. based on sophisticated exact parameterised algorithms. In particular, for $l=1$ (factor-$1.5$ approximation) our algorithm runs in time $\mathrm{O}^*(c^k)$ for an explicit constant $c$, where the parameter satisfies $k \leq \frac{2}{3}\tau$, and $\tau$ is the size of a minimum vertex cover.
Additionally, we present an improved polynomial-time approximation algorithm for graphs of average degree at most four and a limited number of vertices with degree less than two.
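As background only, the classical polynomial-time baseline that the parameterised algorithms above improve upon is the factor-2 approximation via a maximal matching; this sketch shows that ratio-2 starting point, not the talk's algorithms:

```python
# Classical 2-approximation for vertex cover: greedily build a maximal
# matching and take both endpoints of every matched edge.  Any cover must
# contain at least one endpoint per matched edge, hence the factor 2.

def vertex_cover_2approx(edges):
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched |= {u, v}
            cover |= {u, v}
    return cover

# A 5-cycle: the optimum cover has 3 vertices; the approximation returns 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
c = vertex_cover_2approx(edges)
assert all(u in c or v in c for u, v in edges)   # it is a cover
assert len(c) <= 2 * 3                           # within factor 2 of optimum
```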
This is the second part of the informal seminar giving an introduction to symbolic convex analysis. The published paper on which this seminar is mainly based can be found at http://www.carma.newcastle.edu.au/jon/fenchel.pdf.
Nonexpansive operators in Banach spaces are of utmost importance in Nonlinear Analysis and Optimization Theory. We are concerned in this talk with classes of operators which are, in some sense, nonexpansive not with respect to the norm, but with respect to Bregman distances. Since these distances are not symmetric in general, it seems natural to distinguish between left and right Bregman nonexpansive operators. Some left classes have already been studied quite intensively, so this talk is mainly devoted to right Bregman nonexpansive operators and the relationship between both classes.
This talk is based on joint works with Prof. Simeon Reich and Shoham Sabach from Technion-Israel Institute of Technology, Haifa.
Multi-linear functions appear in many global optimization problems, including reformulated quadratic and polynomial optimization problems. There is an extended formulation for the convex hull of the graph of a multi-linear function that requires the use of an exponential number of variables. Relying on this result, we study an approach that generates relaxations for multiple terms simultaneously, as opposed to methods that relax the nonconvexity of each term individually. In some special cases, we are able to establish analytic bounds on the ratio of the strength of the term-by-term and convex hull relaxations. To our knowledge, these are the first approximation-ratio results for the strength of relaxations of global optimization problems. The results lend insight into the design of practical (non-exponentially sized) relaxations. Computations demonstrate that the bounds obtained in this manner are competitive with the well-known semi-definite programming based bounds for these problems.
Joint work with Jim Luedtke, University of Wisconsin-Madison, and Mahdi Namazifar, now with Opera Solutions.
This talk is an introduction to symbolic convex analysis.
I will discuss a new algorithm for counting points on hyperelliptic curves over finite fields.
In this talk, we present our ongoing efforts in solving a number of continuous facility location problems that involve sets using recently developed tools of variational analysis and generalized differentiation. Subgradients of a class of nonsmooth functions called minimal time functions are developed and employed to study these problems. Our approach advances the applications of variational analysis and optimization to a well-developed field of facility location, while shedding new light on well-known classical geometry problems such as the Fermat-Torricelli problem, the Sylvester smallest enclosing circle problem, and the problem of Apollonius.
Automata groups are a class of groups generated by recursively defined automorphisms of a regular rooted tree. Associated to each automata group is an object known as the self-similarity graph. Nekrashevych showed that in the case where the group satisfies a natural condition known as contracting, the self-similarity graph is Gromov-hyperbolic and has boundary homeomorphic to the limit space of the group action. I will talk about self-similarity graphs of automata groups that do not satisfy the contracting condition.
Giuga's conjecture will be introduced, and we will discuss what has changed in the computational search for a counterexample over the last 17 years.
Infecting Aedes aegypti with Wolbachia has been proposed as an alternative strategy for reducing dengue transmission. If Wolbachia-infected mosquitoes can invade and dominate the population of Aedes aegypti mosquitoes, they can reduce dengue transmission. Cytoplasmic Incompatibility (CI) provides the reproductive advantage by which Wolbachia-infected mosquitoes can reproduce more and dominate the population. A mosquito population model is developed in order to determine the survival of Wolbachia-infected mosquitoes when they are released into the wild. The model has two physically realistic stable steady states. The model reveals that once the Wolbachia-infected mosquitoes survive, they ultimately dominate the population.
We study the problem of finding an interpolating curve passing through prescribed points in the Euclidean space. The interpolating curve minimizes the pointwise maximum length, i.e., L∞-norm, of its acceleration. We re-formulate the problem as an optimal control problem and employ simple but effective tools of optimal control theory. We characterize solutions associated with singular (of infinite order) and nonsingular controls. We reduce the infinite dimensional interpolation problem to an ensuing finite dimensional one and derive closed form expressions for interpolating curves. Consequently we devise numerical techniques for finding interpolating curves and illustrate these techniques on examples.
I will give an extended version of my talk at the AustMS meeting about some ongoing work with Pierre-Emmanuel Caprace and George Willis.
Given a locally compact topological group G, the connected component of the identity is a closed normal subgroup G_0 and the quotient group is totally disconnected. Connected locally compact groups can be approximated by Lie groups, and as such are relatively well-understood. By contrast, totally disconnected locally compact (t.d.l.c.) groups are a more difficult class of objects to understand. Unlike in the connected case, it is probably hopeless to classify the simple t.d.l.c. groups, because this would include for instance all simple groups (equipped with the discrete topology). Even classifying the finitely generated simple groups is widely regarded as impossible. However, we can prove some general results about broad classes of (topologically) simple t.d.l.c. groups that have a compact generating set.
Given a non-discrete t.d.l.c. group, there is always an open compact subgroup. Compact totally disconnected groups are residually finite, so have many normal subgroups. Our approach is to analyse a t.d.l.c. group G (which may itself be simple) via normal subgroups of open compact subgroups. From these we obtain lattices and Cantor sets on which G acts, and we can use properties of these actions to demonstrate properties of G. For instance, we have made some progress on the question of whether a compactly generated topologically simple t.d.l.c. group is abstractly simple, and found some necessary conditions for G to be amenable.
We discuss how the title is related to π.
In 1966 Gallai conjectured that a connected graph of order n can be decomposed into n/2 or fewer paths when n is even, or (n+1)/2 or fewer paths when n is odd. We shall discuss old and new work on this as yet unsolved conjecture.
Many cognitive models derive their predictions through simulation. This means that it is difficult or impossible to write down a probability distribution or likelihood that characterizes the random behavior of the data as a function of the model's parameters. In turn, the lack of a likelihood means that standard Bayesian analyses of such models are impossible. In this presentation we demonstrate a procedure called approximate Bayesian computation (ABC), a method for Bayesian analysis that circumvents the evaluation of the likelihood. Although they have shown great promise for likelihood-free inference, current ABC methods suffer from two problems that have largely prevented their mainstream adoption: long computation time and an inability to scale beyond models with few parameters. We introduce a new ABC algorithm, called ABCDE, that includes differential evolution as a computationally efficient genetic algorithm for proposal generation. ABCDE is able to obtain accurate posterior estimates an order of magnitude faster than a popular rejection-based method and to scale to high-dimensional parameter spaces that have proven difficult for the current rejection-based ABC methods. To illustrate its utility we apply ABCDE to several well-established simulation-based models of memory and decision-making that have never been fit in a Bayesian framework.
AUTHORS: Brandon M. Turner (Stanford University) and Per B. Sederberg (The Ohio State University)
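The likelihood-free idea behind ABC can be illustrated with a minimal rejection sampler (a toy sketch, not the ABCDE algorithm itself): infer the mean of a Gaussian by keeping only those prior draws whose simulated summary statistic lands close to the observed one.

```python
import random
import statistics

def abc_rejection(observed, simulate, prior_draw, eps, n_accept=500):
    """ABC rejection: keep prior draws whose simulated data match the
    observed summary statistic to within eps; no likelihood is evaluated."""
    obs_stat = statistics.mean(observed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw()
        sim = simulate(theta, len(observed))
        if abs(statistics.mean(sim) - obs_stat) < eps:
            accepted.append(theta)
    return accepted

random.seed(1)
data = [random.gauss(2.0, 1.0) for _ in range(100)]  # "observed" data
posterior = abc_rejection(
    data,
    simulate=lambda th, n: [random.gauss(th, 1.0) for _ in range(n)],
    prior_draw=lambda: random.uniform(-5.0, 5.0),
    eps=0.2,
)
```

The accepted draws approximate the posterior over the mean; the long computation time of exactly this accept/reject loop is one of the two problems ABCDE is designed to address.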
Motivated by the desire to visualise large mathematical data sets, especially in number theory, we offer various tools for representing floating point numbers as planar walks and for quantitatively measuring their “randomness”.
What to expect: some interesting ideas, many beautiful pictures (including a 108-gigapixel picture of π), and some easy-to-understand maths.
What you won’t get: too many equations, difficult proofs, or any “real walking”.
This is a joint work with David Bailey, Jon Borwein and Peter Borwein.
In 1966 Gallai conjectured that a connected graph of order n can be decomposed into n/2 or fewer paths when n is even, or (n+1)/2 or fewer paths when n is odd. We shall discuss old and new work on this as yet unsolved conjecture.
In this talk, we will show that a D-finite Mahler function is necessarily rational. This gives a new proof of the rational-transcendental dichotomy of Mahler functions due to Nishioka. Using our method of proof, we also provide a new proof of a Pólya-Carlson type result for Mahler functions due to Randé; that is, a Mahler function which is meromorphic in the unit disk is either rational or has the unit circle as a natural boundary. This is joint work with Jason Bell and Eric Rowland.
If some arithmetical sums are small then the complex zeroes of the zeta-function are linearly dependent. Since we don't believe the conclusion we ought not to believe the premise. I will show that the zeroes are 'almost linearly independent' which implies, in particular, that the Mertens conjecture fails more drastically than was previously known.
In this talk projection algorithms for solving (nonconvex) feasibility problems in Euclidean spaces are considered. Of special interest are the Method of Alternating Projections (MAP) and the Averaged Alternating Reflection Algorithm (AAR) which cover some of the state of the art algorithms for our intended application, the phase retrieval problem. In the case of convex feasibility, firm nonexpansiveness of projection mappings is a global property that yields global convergence of MAP, and, for consistent problems, AAR. Based on epsilon-delta-regularity of sets (Bauschke, Luke, Phan, Wang 2012) a relaxed local version of firm nonexpansiveness with respect to the intersection is introduced for consistent feasibility problems. This combined with a type of coercivity condition, which relates to the regularity of the intersection, yields local linear convergence of MAP for a wide class of nonconvex problems, and even local linear convergence of AAR in more limited nonconvex settings.
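In the convex setting described above, MAP is simply the alternation of two projections. A minimal sketch for two lines in the plane (an illustrative toy instance, not the phase retrieval application) shows the linear convergence to a point of the intersection:

```python
def proj_diag(p):
    """Projection onto the line y = x."""
    t = (p[0] + p[1]) / 2
    return (t, t)

def proj_horiz(p):
    """Projection onto the line y = 1."""
    return (p[0], 1.0)

def alternating_projections(p, iters=100):
    """MAP: alternate the two projections; for these convex sets the
    iterates converge to the intersection point (1, 1)."""
    for _ in range(iters):
        p = proj_diag(proj_horiz(p))
    return p
```

Here each full cycle maps the first coordinate x to (x + 1)/2, so the error halves per cycle, exactly the linear convergence that the regularity conditions in the talk generalise to nonconvex sets.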
In this talk, we study the properties of integral functionals induced on the Banach space of integrable functions by closed convex functions on a Euclidean space.
We give sufficient conditions for such integral functions to be strongly rotund (well-posed). We show that in this generality functions such as the Boltzmann-Shannon entropy and the Fermi-Dirac entropy are strongly rotund. We also study convergence in measure and give various limiting counter-examples.
We consider the problem of characterising embeddings of an abstract group into totally disconnected locally compact (tdlc) groups. Specifically, for each pair of nonzero integers $m,n$ we construct a tdlc group containing the Baumslag-Solitar group $BS(m,n)$ as a dense subgroup, and compute the scales of elements and flat rank of the tdlc group.
This is joint work with George Willis.
Linear water wave theory is one of the most important branches of fluid mechanics. Practically, it underpins most of the engineering design of ships, offshore structures, etc. It also has a very rich history in the development of applied mathematics. In this talk I will focus on the connection between solutions in the frequency and time domains and show how we can use various formulations to make numerical calculations and to construct approximate solutions. I will illustrate these methods with applications to some simple wave scattering problems.
I will discuss four much abused words Interdisciplinarity, Innovation, Collaboration and Creativity. I will describe what they mean for different stakeholder groups and will speak about my own experiences as a research scientist, as a scientific administrator, as an educator and even as a small high-tech businessman. I will also offer advice that can of course be ignored.
George continues his series of talks on the embedding of Higman-Thompson groups into the groups of almost automorphisms of trees, structure theorems and scale calculations for these examples.
In this talk, we study the properties of integral functionals induced on $L_\text{E}^1(S,\mu)$ by closed convex functions on a Euclidean space E. We give sufficient conditions for such integral functions to be strongly rotund (well-posed). We show that in this generality functions such as the Boltzmann-Shannon entropy and the Fermi-Dirac entropy are strongly rotund. We also study convergence in measure and give various limiting counter-examples.
This is joint work with Jon Borwein.
Recently the Alternating Projection Algorithm was extended to CAT(0) spaces. We will look at this, and also at current work on extending the Douglas–Rachford Algorithm to CAT(0) spaces. In CAT(0) spaces the underlying linear structure of the space is dispensed with, and this allows certain algorithms to be extended to spaces such as classical hyperbolic spaces, simply connected Riemannian manifolds of non-positive curvature, R-trees and Euclidean buildings.
This week Brian Alspach will complete the discussion on Burnside's Theorem and vertex-transitive graphs of prime order.
George continues his series of talks on the embedding of Higman-Thompson groups into the groups of almost automorphisms of trees, structure theorems and scale calculations for these examples.
A frequent theme of 21st century experimental math is the computer discovery of identities, typically done by means of computing some mathematical entity (a sum, limit, integral, etc) to very high numeric precision, then using the PSLQ algorithm to identify the entity in terms of well known constants.
Perhaps the most successful application of this methodology has been to identify integrals arising in mathematical physics. This talk will present numerous examples of this type, including integrals from quantum field theory, Ising theory, random walks, 3D lattice problems, and even mouse brains. In some cases, it is necessary to compute these integrals to 3000-digit precision, and developing techniques to do such computations is a daunting technical challenge.
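The identify-by-precision workflow can be illustrated with a toy stand-in for PSLQ: a brute-force search over small integer coefficients that "recognises" a numerically computed value in terms of known constants (genuine PSLQ scales to many basis terms and thousands of digits; this sketch does not):

```python
import math
from itertools import product

def find_relation(x, basis, bound=10, eps=1e-9):
    """Toy stand-in for PSLQ: brute-force small integer coefficients
    c with x ~= sum(c_i * basis_i) to within eps."""
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        if any(coeffs) and abs(x - sum(c * b for c, b in zip(coeffs, basis))) < eps:
            return coeffs
    return None

# a value "computed numerically", secretly equal to pi + 2*log(2)
mystery = math.pi + 2 * math.log(2)
```

Calling `find_relation(mystery, [math.pi, math.log(2)])` recovers the coefficients (1, 2); the tight tolerance is what rules out spurious matches, which is why the real integrals must be computed to such high precision.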
Given a positive integer b, we say that a mathematical constant alpha is "b-normal" or "normal base b" if every m-long string of digits appears in the base-b expansion of alpha with precisely the limiting frequency 1/b^m. Although it is well known from measure theory that almost all real numbers are b-normal for all integers b > 1, nonetheless proving normality (or nonnormality) for specific constants, such as pi, e and log(2), has been very difficult.
In the 21st century, a number of different approaches have been attempted on this problem. For example, a recent study employed a Poisson model of normality to conclude that based on the first four trillion hexadecimal digits of pi, it is exceedingly unlikely that pi is not normal. In a similar vein, graphical techniques, in most cases based on digit-generated "random" walks, have been successfully employed to detect nonnormality in certain cases.
On the analytical front, it was shown in 2001 that the normality of certain reals, including log(2) and pi (or any other constant given by a BBP formula), could be reduced to a question about the behavior of certain specific pseudorandom number generators. Subsequently normality was established for an uncountable class of reals (the "Stoneham numbers"), the simplest of which is: alpha_{2,3} = Sum_{n >= 0} 1/(3^n 2^(3^n)), which is provably normal base 2. Just as intriguing is a recent result that alpha_{2,3}, for instance, is provably NOT normal base 6. These results have now been generalized to some extent, although many open cases remain.
In this talk I will present an introduction to the theory of normal numbers, including brief mention of new graphical- and statistical-based techniques. I will then sketch a proof of the normality base 2 (and nonnormality base 6) of Stoneham numbers, then suggest some additional lines of research. Various parts of this research were conducted in collaboration with Richard Crandall, Jonathan and Peter Borwein, Francisco Aragon, Cristian Calude, Michael Dinneen, Monica Dumitrescu and Alex Yee.
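Because alpha_{2,3} is given by a rapidly converging series of rationals, digit experiments like those mentioned above are easy to run with exact arithmetic; a sketch (truncating the series, whose tail after six terms is below 2^-729, so the early bits are unaffected in practice):

```python
from fractions import Fraction

def stoneham_2_3(terms=6):
    """Partial sum of alpha_{2,3} = sum_{n>=0} 1/(3^n * 2^(3^n))."""
    return sum(Fraction(1, 3**n * 2**(3**n)) for n in range(terms))

def fractional_digits(x, base, count):
    """First `count` digits of the fractional part of x in `base`
    (exact, since x is a Fraction)."""
    out = []
    for _ in range(count):
        x *= base
        d = int(x)
        out.append(d)
        x -= d
    return out

bits = fractional_digits(stoneham_2_3(), 2, 500)
```

Counting m-long strings in `bits` (or in the base-6 digits obtained the same way) gives a hands-on view of the normality base 2 and nonnormality base 6 discussed in the talk.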
We shall continue exploring implications of Burnside's Theorem for vertex-transitive graphs.
George is going to continue his series of talks on the embedding of Higman-Thompson groups into the groups of almost automorphisms of trees, and over several weeks look at the structure theorems and scale calculations for these examples.
Variational methods have been used to derive symmetric solutions for many problems related to real world applications. To name a few we mention periodic solutions to ODEs related to N-body problems and electrical circuits, symmetric solutions to PDEs, and symmetry in derivatives of spectral functions. In this talk we examine the commonalities of using variational methods in the presence of symmetry.
This is an ongoing collaborative research project with Jon Borwein. So far our questions still outnumber our answers.
Groundwater makes up nearly 30% of the entire world’s freshwater, but mathematical models for a better understanding of the system are difficult to validate due to the disordered nature of the porous media and the complex geometry of the channels of flow. In this seminar, after establishing the statistical macroscopic equivalent of the Navier-Stokes equations for groundwater hydrodynamics and its consequences in terms of the Laplace and diffusion equations, some cases will be solved in terms of special functions using a modern Computer Algebra System.
We shall continue exploring implications of Burnside's Theorem for vertex-transitive graphs.
George is going to start giving some talks on the embedding of Higman-Thompson groups into the groups of almost automorphisms of trees, and over several weeks will look at the structure theorems and scale calculations for these examples.
We are holding an afternoon mini-conference, in conjunction with the School of Mathematical and Physical Sciences.
If you are engaged in any of the many Outreach Activities in the Mathematical Sciences that people from CARMA, our School and beyond contribute to (for example, visiting primary or secondary schools, presenting to schools who visit us, public lectures, media interviews, or helping run maths competitions) and would like to share what you're doing, please let us know. Also, if you're not currently engaged in an outreach activity but have an idea that you would like to try, and want to use a talk about your idea as a "sounding board", please feel free to do so.
There will be some very short talks: 5 minutes, and some longer talks: 20 minutes, with time for discussion in between. We'll be serving afternoon tea throughout the afternoon; and will have an open discussion forum near the end of the day. If you're interested in giving a talk please contact Judy-anne.Osborn@newcastle.edu.au, indicating whether you'd prefer a 5-minute or a 20-minute slot. If you're simply interested in attending, please let us know as well for catering purposes. The event will be held in one of the function rooms in the Shortland building.
12:05 — Begin, with welcome and lunch
15:45 — Last talk finishes
15:45-16:15 — Open discussion
Burnside's Theorem characterising transitive permutation groups of prime degree has some wonderful applications for graphs. This week we start an exploration of this topic.
(Joint speakers, Jon Borwein and Michael Rose)
Using fractal self-similarity and functional-expectation relations, the classical theory of box integrals is extended to encompass a new class of fractal “string-generated Cantor sets” (SCSs) embedded in unit hypercubes of arbitrary dimension. Motivated by laboratory studies on the distribution of brain synapses, these SCSs were designed for dimensional freedom: a suitable choice of generating string allows for fine-tuning the fractal dimension of the corresponding set. We also establish closed forms for certain statistical moments on SCSs and report various numerical results. The associated paper is at http://www.carma.newcastle.edu.au/jon/papers.html#PAPERS.

Let $F(z)$ be a power series, say with integer coefficients. In the late 1920s and early 1930s, Kurt Mahler discovered that for $F(z)$ satisfying a certain type of functional equation (now called Mahler functions), the transcendence of the function $F(z)$ could be used to prove the transcendence of certain special values of $F(z)$. Mahler's main application at the time was to prove the transcendence of the Thue-Morse number $\sum_{n\geq 0}t(n)/2^n$ where $t(n)$ is either 0 or 1 depending on the parity of the number of 1s in the base 2 expansion of $n$. In this talk, I will talk about some of the connections between Mahler functions and finite automata and highlight some recent approaches to large problems in the area. If time permits, I will outline a new proof of a version of Carlson's theorem for Mahler functions; that is, a Mahler function is either rational or it has the unit circle as a natural boundary.
Snarks are 3-regular graphs that are not 3-edge-colourable and are cyclically 4-edge-connected. They exist but are hard to find. On the other hand, it is believed that Cayley graphs can never be snarks. The latter is the subject of the next series of talks.
Snarks are 3-regular graphs that are not 3-edge-colourable and are cyclically 4-edge-connected. They exist but are hard to find. On the other hand, it is believed that Cayley graphs can never be snarks. The latter is the subject of the next series of talks.
Hajek proved that a WUR Banach space is an Asplund space. This result suggests that the WUR property might have interesting consequences as a dual property. We show that
(i) every Banach space with separable second dual can be equivalently renormed to have a WUR dual,
(ii) under certain embedding conditions a Banach space with a WUR dual is reflexive.
Snarks are 3-regular graphs that are not 3-edge-colourable and are cyclically 4-edge-connected. They exist but are hard to find. On the other hand, it is believed that Cayley graphs can never be snarks. The latter is the subject of the next series of talks.
We consider the bipartite version of the degree/diameter problem; namely, find the maximum number Nb(d,D) of vertices in a bipartite graph of maximum degree d>2 and diameter D>2. The actual value of Nb(d,D) is still unknown for most (d,D) pairs.
The well-known Moore bound Mb(d,D) gives a general upper bound for Nb(d,D); graphs attaining this bound are called Moore (bipartite) graphs. Moore bipartite graphs are very scarce; they may only exist for D=3, 4 or 6, and for no other diameters. Interest has therefore shifted to investigating the existence or otherwise of graphs missing the Moore bound by a few vertices. A graph with order Mb(d,D)-e is called a graph of defect e.
It has been proved that bipartite graphs of defect 2 do not exist when D>3. In our paper we 'almost' prove that bipartite graphs of defect 4 cannot exist when D>4, thereby establishing a new upper bound on Nb(d,D) for more than 2/3 of all (d,D) combinations.
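The bipartite Moore bound referred to above has the simple closed form Mb(d,D) = 2(1 + (d-1) + ... + (d-1)^(D-1)), which is easy to tabulate:

```python
def moore_bipartite(d, D):
    """Bipartite Moore bound: Mb(d,D) = 2 * sum_{i=0}^{D-1} (d-1)**i.
    Counts vertices reachable from an edge of a bipartite graph of
    maximum degree d within diameter D."""
    return 2 * sum((d - 1)**i for i in range(D))

# Attained, e.g., by the Heawood graph: Mb(3,3) = 14 vertices.
```

A graph of defect e then has order moore_bipartite(d, D) - e, matching the definition in the abstract.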
Dr Koerber will speak about the experience of using MapleTA extensively in undergraduate teaching at the University of Adelaide, and demonstrate how they have been using the system there. Bio: Adrian Koerber is Director of First Year Studies in Mathematics at the University of Adelaide. His mathematical research is in the area of modelling gene networks.
We present a nonconvex bundle technique where function and subgradient values are available only up to an error tolerance which remains unknown to the user. The challenge is to develop an algorithm which converges to an approximate solution which, despite the lack of information, is as good as one can hope for. For instance, if data are known up to the error $O(\epsilon)$, the solution should also be accurate up to $O(\epsilon)$. We show that the oracle of downshifted tangents is an excellent tool to deal with this difficult situation.
We consider the bipartite version of the degree/diameter problem; namely, find the maximum number Nb(d,D) of vertices in a bipartite graph of maximum degree d>2 and diameter D>2. The actual value of Nb(d,D) is still unknown for most (d,D) pairs.
The well-known Moore bound Mb(d,D) gives a general upper bound for Nb(d,D); graphs attaining this bound are called Moore (bipartite) graphs. Moore bipartite graphs are very scarce; they may only exist for D=3, 4 or 6, and for no other diameters. Interest has therefore shifted to investigating the existence or otherwise of graphs missing the Moore bound by a few vertices. A graph with order Mb(d,D)-e is called a graph of defect e.
It has been proved that bipartite graphs of defect 2 do not exist when D>3. In our paper we 'almost' prove that bipartite graphs of defect 4 cannot exist when D>4, thereby establishing a new upper bound on Nb(d,D) for more than 2/3 of all (d,D) combinations.
Motivated by questions of algorithm analysis, we provide several distinct approaches to determining convergence and limit values for a class of linear iterations.
This is joint work with D. Borwein and B. Sims.
A body moves in a rarefied medium of resting particles and at the same time very slowly rotates (somersaults). Each particle of the medium is reflected elastically when hitting the body boundary (multiple reflections are possible). The resulting resistance force acting on the body depends on the time; we are interested in minimizing the time-averaged value of resistance (which is called $R$). The value $R(B)$ is well defined in terms of billiard in the complement of $B$, for any bounded body $B \subset \mathbb{R}^d$, $d\geq 2$ with piecewise smooth boundary.
Let $C\subset\mathbb{R}^d$ be a bounded convex body and $C_1\subset C$ be another convex body with $\partial C_1 \cap \partial C=\varnothing$. It would be interesting to get an estimate for $$R(C_1,C)= \inf_{C_1\subset B \subset C} R(B). \qquad (1)$$ If $\partial C_1$ is close to $\partial C$, problem (1) can be referred to as minimizing the resistance of the convex body $C$ by "roughening" its surface. We cannot solve problem (1); however we can find the limit $$\lim_{\text{dist}(\partial C_1,\partial C)\rightarrow 0} \frac{R(C_1,C)}{R(C)}. \qquad (2)$$
It will be explained that problem (2) can be solved by reduction to a special problem of optimal mass transportation, where the initial and final measurable spaces are complementary hemispheres, $X=\{x=(x_1,...,x_d)\in S^{d-1}: x_1\geq 0\}$ and $Y=\{x\in S^{d-1}:x_1\leq 0\}$. The transportation cost is the squared distance, $c(x,y)=\frac{1}{2}|x-y|^2$, and the measures in $X$ and $Y$ are obtained from the $(d-1)$-dimensional Lebesgue measure on the equatorial circle $\{x=(x_1,...,x_d):|x|\leq 1,x_1=0\}$ by parallel translation along the vector $e_1=(1,0,...,0)$. Let $C(\nu)$ be the total cost corresponding to the transport plan $\nu$ and let $\nu_0$ be the transport plan generated by parallel translation along $e_1$; then the value $\frac{\inf C(\nu)}{C(\nu_0)}$ coincides with the limit in (2).
Surprisingly, this limit does not depend on the body $C$ and depends only on the dimension $d$.
In particular, if $d=3$ ($d=2$), it equals (approximately) 0.96945 (0.98782). In other words, the resistance of a 3-dimensional (2-dimensional) convex body can be decreased by 3.05% (correspondingly, 1.22%) at most by roughening its surface.
The Douglas-Rachford algorithm is an iterative method for finding a point in the intersection of two (or more) closed sets. It is well-known that the iteration (weakly) converges when it is applied to convex subsets of a Hilbert space. Despite the absence of a theoretical justification, the algorithm has also been successfully applied to various non-convex practical problems, including finding solutions for the eight queens problem, or sudoku puzzles. In particular, we will show how these two problems can be easily modelled.
With the aim of providing some theoretical explanation of the convergence in the non-convex case, we have established a region of convergence for the prototypical non-convex Douglas-Rachford iteration which finds a point in the intersection of a line and a circle. Previous work was only able to establish local convergence, and was ineffective in that no explicit region of convergence could be given.
PS: Bring your hardest sudoku puzzle :)
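The prototypical line-and-circle iteration mentioned above fits in a few lines of code (a sketch for the unit circle and the horizontal line y = 1/2, with a starting point chosen near the intersection):

```python
import math

def reflect_line(p, h=0.5):
    """Reflector 2*P - I for the horizontal line y = h."""
    return (p[0], 2*h - p[1])

def reflect_circle(p):
    """Reflector 2*P - I for the unit circle (assumes p != origin)."""
    r = math.hypot(p[0], p[1])
    return (2*p[0]/r - p[0], 2*p[1]/r - p[1])

def douglas_rachford(p, iters=500, h=0.5):
    """Iterate x -> (x + R_circle(R_line(x))) / 2; the 'shadow'
    P_line(x) tracks an intersection point of the line and circle."""
    for _ in range(iters):
        q = reflect_circle(reflect_line(p, h))
        p = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return (p[0], h)  # shadow point on the line
```

Starting from (0.8, 0.6), the shadow sequence approaches the intersection point (sqrt(3)/2, 1/2), illustrating the convergence behaviour that the talk's explicit region makes precise.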
Based on generalized backward shift operators we introduce adaptive Fourier decomposition. Then we discuss its relations and applications to (1) system identification; (2) computation of the Hilbert transform; (3) an algorithm for the best order-n rational approximation to functions in the Hardy space H2; (4) forward and backward shift invariant spaces; (5) band preservation in filter design; (6) phase retrieval; and (7) the Bedrosian identity. The talk also concerns possible generalizations of the theory and applications to higher dimensional spaces.
I will give a brief introduction to the theory of self-similar groups, focusing on a couple of pertinent examples: Grigorchuk's group of intermediate growth, and the basilica group.
This week the speaker in the Discrete Mathematics Instructional Seminar is Judy-anne Osborn who will be discussing Hadamard matrices.
TBA
The double zeta values are one natural way to generalise the Riemann zeta function at the positive integers; they are defined by $\zeta(a,b) = \sum_{n=1}^\infty \sum_{m=1}^{n-1} \frac{1}{n^a m^b}$. We give a unified and completely elementary method to prove several sum formulae for the double zeta values. We also discuss an experimental method for discovering such formulae.
Moreover, we use a reflection formula and recursions involving the Riemann zeta function to obtain new relations of closely related functions, such as the Witten zeta function, alternating double zeta values, and more generally, character sums.
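In the experimental spirit of the talk, Euler's classical sum formula $\zeta(2,1)=\zeta(3)$ is easy to test numerically straight from the definition above (a quick truncated-sum check, not a proof):

```python
def double_zeta(a, b, N=20000):
    """Truncation of zeta(a,b) = sum over n > m >= 1 of 1/(n**a * m**b);
    the running inner sum avoids an O(N**2) double loop."""
    total, inner = 0.0, 0.0
    for n in range(2, N + 1):
        inner += 1.0 / (n - 1)**b  # inner = sum_{m=1}^{n-1} 1/m**b
        total += inner / n**a
    return total

# Euler's sum formula predicts zeta(2,1) = zeta(3) = 1.2020569...
approx = double_zeta(2, 1)
```

The truncation error for $\zeta(2,1)$ is roughly $(\log N + 1)/N$, so N = 20000 already matches $\zeta(3)$ to three decimal places; this is how candidate formulae can be screened before seeking an elementary proof.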
There is a high prevalence of tuberculosis (TB) in Papua New Guinea (PNG), which is exacerbated by the presence of drug-resistant TB strains and HIV infection. This is an important public health issue not only locally within PNG, but also in Australia due to the high cross-border traffic in the Torres Strait Island–Western Province (PNG) treaty region. A metapopulation model is used to evaluate the effect of varying control strategies in the region, and some initial cost-benefit analysis figures are presented.
This week the speaker in the Discrete Mathematics Instructional Seminar is Judy-anne Osborn who will be discussing Hadamard matrices.
A graph on v vertices is called pancyclic if it contains cycles of every length from 3 to v. Obviously such graphs exist — the complete graph on v vertices is an example. We shall look at the question, what is the minimum number of edges in a pancyclic graph? Interestingly, this question was "solved", incorrectly, in 1978. A complete solution is not yet known.
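For very small graphs, pancyclicity can be checked by brute force (exponential in the number of vertices, so an illustration only):

```python
from itertools import permutations

def has_cycle_of_length(adj, k):
    """Brute-force test for a k-cycle: try vertex sequences, anchoring
    each candidate cycle at its smallest vertex to skip rotations."""
    for seq in permutations(adj, k):
        if seq[0] == min(seq) and all(
                seq[(i + 1) % k] in adj[seq[i]] for i in range(k)):
            return True
    return False

def is_pancyclic(adj):
    """Pancyclic: cycles of every length from 3 to v exist."""
    return all(has_cycle_of_length(adj, k) for k in range(3, len(adj) + 1))

# complete graph K5 as an adjacency dict
K5 = {v: {u for u in range(5) if u != v} for v in range(5)}
```

K5 passes the check with 10 edges; the open question in the talk is how far below the complete graph's edge count a pancyclic graph can go.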
This week Brian Alspach concludes his series of talks entitled "The Anatomy Of A Famous Conjecture." We shall be in V27 - note room change.
This involves (in pre-nonstandard-analysis times) the development of a simple system of infinities and infinitesimals that helps to clarify Cantor's ternary set, nonmeasurable sets and Lebesgue integration. The talk will also include memories of my time as a maths student at Newcastle University College, Tighes Hill, from 1959 to 1961.
Brian Alspach will continue his discussion "The Anatomy Of A Famous Conjecture."
This talk will survey some of the classical and recent results concerning operators composed of a projection onto a compact set in time, followed by a projection onto a compact set in frequency. Such "time- and band-limiting" operators were studied by Landau, Slepian, and Pollak in a series of papers published in the Bell Systems Tech. Journal in the early 1960s identifying the eigenfunctions, providing eigenvalue estimates, and describing spaces of "essentially time- and band-limited signals."
Further progress on time- and band-limiting has been intermittent, but genuine recent progress has been made in terms of numerical analysis, sampling theory, and extensions to multiband signals, all driven to some extent by potential applications in communications. After providing an outline of the historical developments in the mathematical theory of time- and band-limiting, some details of the sampling theory and multiband setting will be given. Part of the latter represents joint work with Jeff Hogan and Scott Izu.
Brian Alspach will continue his discussion "The Anatomy Of A Famous Conjecture."
This talk will discuss opportunities and challenges related to the development and application of operations research techniques to transportation and logistics problems in non-profit settings. Much research has been conducted on transportation and logistics problems in commercial settings where the goal is either to maximize profit or to minimize cost. Significantly less work has been conducted for non-profit applications. In such settings, the objectives are often more difficult to quantify since issues such as equity and sustainability must be considered, yet efficient operations are still crucial. This talk will present several research projects that introduce new approaches tailored to the objectives and constraints unique to non-profit agencies, which are often concerned with obtaining equitable solutions given limited, and often uncertain, budgets, rather than with maximizing profits.
This talk will assess the potential of operations research to address the problems faced by non-profit agencies and attempt to understand why these problems have been understudied within the operations research community. To do so, we will ask the following questions: Are non-profit operations problems rich enough for academic study? and Are solutions to non-profit operations problems applicable to real communities?
Brian Alspach will continue his discussion "The Anatomy Of A Famous Conjecture."
We consider some fundamental generalized Mordell-Tornheim-Witten (MTW) zeta-function values along with their derivatives, and explore connections with multiple-zeta values (MZVs). To achieve these results, we make use of symbolic integration, high precision numerical integration, and some interesting combinatorics and special-function theory.
Our original motivation was to represent previously unresolved constructs such as Eulerian log-gamma integrals. Indeed, we are able to show that all such integrals belong to a vector space over an MTW basis, and we also present, for a substantial subset of this class, explicit closed-form expressions. In the process, we significantly extend methods for high-precision numerical computation of polylogarithms and their derivatives with respect to order. That said, the focus of our paper is the relation between MTW sums and classical polylogarithms. It is the adumbration of these relationships that makes the study significant.
The associated paper (with DH Bailey and RE Crandall) is at http://carmasite.newcastle.edu.au/jon/MTW1.pdf.
Approximation theory is a classical part of the analysis of functions defined on a Euclidean space or its subsets, and the foundation of its applications, while problems related to high or infinite dimensions create well-known challenges even in the setting of Hilbert spaces. The stability (uniform continuity) of a mapping is one of the traditional properties investigated in various branches of pure and applied mathematics, with further applications in engineering. Examples include the analysis of linear and non-linear PDEs, (short-term) prediction problems, decision-making and data evolution.
We describe the uniform approximation properties of the uniformly continuous mappings between the pairs of Banach and, occasionally, metric spaces from various wide parameterised and non-parameterised classes of spaces with or without the local unconditional structure in a quantitative manner. The striking difference with the finite-dimensional setting is represented by the presence of Tsar'kov's phenomenon. Many tools in use are developed under the scope of our quasi-Euclidean approach. Its idea seems to be relatively natural in light of the compressed sensing and distortion phenomena.
The talk will outline some topics associated with constructions for Hadamard matrices, in particular, a relatively simple construction, given by a sum of Kronecker products of ingredient matrices obeying certain conditions. Consideration of the structure of the ingredient matrices leads, on the one hand, to consideration of division algebras and Clifford algebras, and on the other hand, to searching for multisets of {-1,1} ingredient matrices. Structures within the sets of ingredient matrices can make searching more efficient.
In this talk, we consider a general convex feasibility problem in Hilbert space, and analyze a primal-dual pair of problems generated via a duality theory introduced by Svaiter. We present some algorithms and their convergence properties. The focus is a general primal-dual principle for strong convergence of some classes of algorithms. In particular, we give a different viewpoint for the weak-to-strong principle of Bauschke and Combettes. We also discuss how subgradient and proximal type methods fit in this primal-dual setting.
Joint work with Maicon Marques Alves (Universidade Federal de Santa Catarina-Brazil)
Brian Alspach will continue with "The Anatomy Of A Famous Conjecture" this Thursday. One can easily pick up the thread this week without having attended last week, but if you miss this week it will not be easy to join in next week.
12:00-1:00 | Michael Coons (University of Waterloo)
1:00-2:00 | Claus Koestler (Aberystwyth University)
2:00-3:00 | Eric Mortenson (The University of Queensland)
3:00-4:00 | Ekaterina Shemyakova (University of Western Ontario)
The exceptional Lie group $G_2$ is a beautiful 14-dimensional continuous group, having relations with such diverse notions as triality, the 7-dimensional cross product and exceptional holonomy. It was found abstractly by Killing in 1887 (complex case) and then realized as a symmetry group by Engel and Cartan in 1894 (real split case). Later, in 1910, Cartan returned to the topic and realized split $G_2$ as the maximal finite-dimensional symmetry algebra of a rank 2 distribution in $\mathbb{R}^5$. In other words, Cartan classified all symmetry groups of Monge equations of the form $y'=f(x,y,z,z',z'')$. I will discuss the higher-dimensional generalization of this fact, based on joint work with Ian Anderson. The compact real form of $G_2$ was realized by Cartan as the automorphism group of the octonions in 1914. In the talk I will also explain how to realize this $G_2$ as the maximal symmetry group of a geometric object.
I have embarked on a project of looking for Hamilton paths in Cayley graphs on finite Coxeter groups. This talk is a report on the progress thus far.
Brian Alspach will continue with "The Anatomy of a Famous Conjecture" this Thursday. One can easily pick up the thread this week without having attended last week, but if you miss this week it will not be easy to join in next week.
In this talk, we consider the structure of maximally monotone operators in Banach space whose domains have nonempty interior, and we present new and explicit structure formulas for such operators. Along the way, we provide new proofs of the norm-to-weak$^*$ closedness and property (Q) of these operators (recently established by Voisei). Various applications and limiting examples are given. This is joint work with Jon Borwein.
This will be an introductory talk which begins by describing the four colour theorem and finite projective planes in the setting of graph decompositions. A problem posed by Ringel at a graph theory meeting in Oberwolfach in 1967 will then be discussed. This problem is now widely known as the Oberwolfach Problem, and is a generalisation of a question asked by Kirkman in 1850. It concerns decompositions of complete graphs into isomorphic copies of spanning regular graphs of degree two.
In my opinion, the most significant unsolved problem in graph decompositions is the cycle double cover conjecture. This talk begins a series on the conjecture, covering its background, its relations to other problems, and partial results.
Simultaneous Localisation and Mapping (SLAM) has become prominent in the field of robotics over the last decade, particularly in application to autonomous systems. SLAM enables any system equipped with exteroceptive (and often inertial) sensors to simultaneously update its own positional estimate and map of the environment by utilising information collected from the surroundings. The solution to the probabilistic SLAM problem can be derived using Bayes' theorem to yield estimates of the system state and covariance. In recursive form, the basic prediction-correction algorithm employs an Extended Kalman Filter (EKF) with Cholesky decomposition for numerical stability during inversion. This talk will present the mathematical formulation and solution of the SLAM problem, along with some algorithms used in implementation. We will then look at some applications of SLAM in the real world and discuss some of the challenges for future development.
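As a toy illustration of the prediction-correction cycle (not from the talk), here is a one-dimensional Kalman filter sketch; in SLAM proper the state stacks robot pose and landmark positions, and an EKF linearises the nonlinear motion and measurement models. Function names are my own.

```python
# A toy 1-D Kalman filter illustrating the predict/correct cycle.
def predict(x, P, u, Q):
    """Motion model x' = x + u with process-noise variance Q."""
    return x + u, P + Q

def update(x, P, z, R):
    """Measurement model z = x + noise with variance R."""
    K = P / (P + R)          # Kalman gain
    x_new = x + K * (z - x)  # correct the state estimate
    P_new = (1 - K) * P      # uncertainty shrinks after a measurement
    return x_new, P_new

x, P = 0.0, 1.0                     # initial estimate and its variance
x, P = predict(x, P, u=1.0, Q=0.1)  # move forward one unit
x, P = update(x, P, z=1.2, R=0.5)   # observe position 1.2
print(x, P)
```

The posterior variance P is strictly smaller after the update, which is the mechanism by which measurements of landmarks tighten the positional estimate.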
We investigate various properties of the sublevel set $\{x : g(x) \leq 1\}$ and the integration of $h$ on this sublevel set when $g$ and $h$ are positively homogeneous functions. For instance, the latter integral reduces to integrating $h\exp(-g)$ on the whole space $\mathbb{R}^n$ (a non-Gaussian integral), and when $g$ is a polynomial, the volume of the sublevel set is a convex function of its coefficients.
In fact, whenever $h$ is non-negative, the functional $\int \phi(g)h dx$ is a convex function of $g$ for a large class of functions $\phi:\mathbb{R}_{+} \to \mathbb{R}$. We also provide a numerical approximation scheme to compute the volume or integrate $h$ (or, equivalently, to approximate the associated non-Gaussian integral). We also show that finding the sublevel set $\{x : g(x) \leq 1\}$ of minimum volume that contains some given subset $K$ is a (hard) convex optimization problem for which we also propose two convergent numerical schemes. Finally, we provide a Gaussian-like property of non-Gaussian integrals for homogeneous polynomials that are sums of squares and critical points of a specific function.
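The stated reduction can be sanity-checked numerically in a simple case. For $g$ positively homogeneous of degree $d$ on $\mathbb{R}^n$ and $h \equiv 1$, the identity reads $\mathrm{vol}(\{g \le 1\}) = \Gamma(1+n/d)^{-1}\int_{\mathbb{R}^n} e^{-g}\,dx$. The sketch below (my own, not from the talk) verifies this for $g(x,y)=x^2+y^2$, where both sides equal $\pi$:

```python
import math

# Check: vol({g <= 1}) = (1 / Gamma(1 + n/d)) * Int_{R^n} exp(-g) dx
# for g(x, y) = x^2 + y^2 (n = 2, d = 2); both sides should equal pi.

def integral_exp_neg_g(g, lim=6.0, steps=200):
    """Midpoint-rule approximation of Int exp(-g) over [-lim, lim]^2."""
    h = 2 * lim / steps
    total = 0.0
    for i in range(steps):
        x = -lim + (i + 0.5) * h
        for j in range(steps):
            y = -lim + (j + 0.5) * h
            total += math.exp(-g(x, y)) * h * h
    return total

g = lambda x, y: x * x + y * y
volume = math.pi                                    # vol of the unit disk {g <= 1}
reduced = integral_exp_neg_g(g) / math.gamma(1 + 2 / 2)
print(volume, reduced)
```

The truncation to $[-6,6]^2$ is harmless here because the integrand decays like $e^{-36}$ at the boundary, so the two printed values agree to many digits.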
In this talk, we consider the automorphism groups of the Cayley graph with respect to the Coxeter generators and the Davis complex of an arbitrary Coxeter group. We determine for which Coxeter groups these automorphism groups are discrete. In the case where they are discrete, we express them as semidirect products of two obvious families of automorphisms. This extends a result of Haglund and Paulin.
In this paper, we construct maximally monotone operators that are not of Gossez's dense-type (D) in many nonreflexive spaces. Many of these operators also fail to possess the Brøndsted-Rockafellar (BR) property. Using these operators, we show that the partial inf-convolution of two BC-functions will not always be a BC-function. This provides a negative answer to a challenging question posed by Stephen Simons. Among other consequences, we deduce that every Banach space which contains an isomorphic copy of the James space J or its dual $J^*$, or of $c_0$ or its dual $l^1$, admits a non-type (D) operator.
The problem posed by Hilbert in 1900 was resolved in the 1930s independently by A. Gelfond and Th. Schneider. The statement is that $a^b$ is transcendental for algebraic $a \ne 0,1$ and irrational algebraic $b$. The aim of the two 2-hour lectures is to give a proof of this result using the so-called method of interpolation determinants.
The modernization of infrastructure networks requires coordinated planning and control. Considering traffic networks and electricity grids raises similar issues on how to achieve substantial new capabilities of effectiveness and efficiency. For instance, power grids need to integrate renewable energy sources and electric vehicles. It is clear that all this can only be achieved by greater reliance on systematic planning in the presence of uncertainty and on sensing, communications, computing and control on an unprecedented scale, these days captured in the term "smart grids". This talk will outline current research on planning future grids and control of smart grids. In particular, the possible roles of network science will be emphasized, along with the challenges arising.
The Mathematics and Statistics Learning Centre was established at the University of Melbourne over a decade ago, to respond to the needs of, initially, first year students of mathematics and statistics. The role of the centre and its Director has grown. The current Director, Dr Deborah King, will expound upon her role in the Centre.
Symbolic and numeric computation have been distinguished by definition: numeric computation puts numerical values in its variables as soon as possible, symbolic computation as late as possible. Chebfun blurs this distinction, aiming for the speed of numerics with the generality and flexibility of symbolics. What happens when someone who has used both Maple and Matlab for decades, and has thereby absorbed the different fundamental assumptions into a "computational stance", tries to use Chebfun to solve a variety of computational problems? This talk reports on some of the outcomes.
In this talk, we present a numerical method for a class of generalized inequality constrained integer linear programming (GILP) problems that includes the usual mixed-integer linear programming (MILP) problems as special cases. Instead of restricting certain variables to integer values as in MILP, we require in these GILP problems that some of the constraint functions take integer values. We present a tighten-and-branch method that has a number of advantages over the usual branch-and-cut algorithms. This includes the ability of keeping the number of constraints unchanged for all subproblems throughout the solution process and the capability of eliminating equality constraints. In addition, the method provides an algorithm framework that allows the existing cutting-plane techniques to be incorporated into the tightening process. As a demonstration, we will solve a well-known "hard ILP problem".
Selection theorems assert that one can pick a well behaved function from a corresponding multifunction. They play a very important role in modern optimization theory. In Part I, I will survey their structure and some applications before sketching some important applications and open research problems in Part II.
The celebrated Littlewood conjecture in Diophantine approximation concerns the simultaneous approximation of two real numbers by rationals with the same denominator. A cousin of this conjecture is the mixed Littlewood conjecture of de Mathan and Teulié, which is concerned with the approximation of a single real number, but where some denominators are preferred to others.
In the talk, we will derive a metrical result extending work of Pollington and Velani on the Littlewood conjecture. Our result implies the existence of an abundance of numbers satisfying both conjectures.
Selection theorems assert that one can pick a well behaved function from a corresponding multifunction. They play a very important role in modern optimization theory. I will survey their structure and some applications before sketching some important open research problems.
Network infrastructures are a common phenomenon. Network upgrades and expansions typically occur over time due to budget constraints. We introduce a class of incremental network design problems that allow investigation of many of the key issues related to the choice and timing of infrastructure expansions and their impact on the costs of the activities performed on that infrastructure. We focus on the simplest variant, incremental network design with shortest paths, and show that even this variant is NP-hard. We investigate structural properties of optimal solutions, analyze the worst-case performance of natural greedy heuristics, derive a 4-approximation algorithm, and present an integer programming formulation together with a small computational study.
Parabolic obstacle problems find applications in the financial markets for pricing American put options. We present a mixed and an equivalent variational inequality hp-interior penalty DG (IPDG) method, combined with an hp-time DG (TDG) method, to solve parabolic obstacle problems approximately. The contact conditions are resolved by a biorthogonal Lagrange multiplier and are component-wise decoupled. These decoupled contact conditions are equivalent to finding the root of a non-linear complementarity function. This non-linear problem can in turn be solved efficiently by a semi-smooth Newton method. For the hp-adaptivity, a p-hierarchical error estimator in conjunction with a local analyticity estimate is employed. For the considered stationary problem this leads to exponential convergence, and for the instationary problem to greatly improved convergence rates. Numerical experiments are given demonstrating the strengths and limitations of the approaches.
The Discrete Mathematics Instructional Seminar will be getting underway again this Thursday.
We start this talk by introducing some basic definitions and properties of geodesics in the setting of metric spaces. After giving some important examples of geodesic metric spaces (which will be used throughout this talk), we shall define the concept of firmly nonexpansive mappings and prove the existence, under mild conditions, of periodic points and fixed points for this class of mappings. Some of these results unify and generalize previous ones. We shall give a result on the $\Delta$-convergence to a fixed point of Picard iterates for firmly nonexpansive mappings, which is obtained from the asymptotic regularity of this class of iterates. Moreover, we shall obtain an effective rate of asymptotic regularity for firmly nonexpansive mappings (this result is new, as far as we know, even in linear spaces). Finally, we shall apply our results to a minimization problem. More precisely, we shall prove the $\Delta$-convergence to a minimizer of a proximal point-like algorithm when applied to a convex, proper, lower semi-continuous function defined on a CAT(0) space.
We present a technique for enhancing a progressive hedging-based metaheuristic for a network design problem that models demand uncertainty with scenarios. The technique uses machine learning methods to cluster scenarios and, subsequently, the metaheuristic repeatedly solves multi-scenario subproblems (as opposed to single-scenario subproblems as is done in existing work). A computational study shows that solving multi-scenario subproblems leads to a significant increase in solution quality, and that how these subproblems are constructed directly impacts solution quality. We also discuss how scenario grouping can be leveraged in a Benders' approach and show preliminary results of its effectiveness. This is joint work with Theo Crainic and Walter Rei at the University of Quebec at Montreal.
Power line communication has been proposed as a possible solution to the "last mile" problem in telecommunications i.e. providing economical high speed telecommunications to millions of end users. As well as the usual background interference (noise), two other types of noise must also be considered for any successful practical implementation of power line communication. Coding schemes have traditionally been designed to deal only with background noise, and in such schemes it is often assumed that background noise affects symbols in codewords independently at random. Recently, however, new schemes have been proposed to deal with the extra considerations in power line communication. We introduce neighbour transitive codes as a group theoretic analogue to the assumption that background noise affects symbols independently at random. We also classify a family of neighbour transitive codes, and show that such codes have the necessary properties to be useful in power line communication.
Integrability theory is the area of mathematics in which methods are developed for the exact solution of partial differential equations, as well as for the study of their properties. We concentrate on PDEs appearing in Physics and other applications. Darboux transformations constitute one of the important methods used in integrability theory and, as well as being a method for the exact solution of linear PDEs, they are an essential part of the method of Lax pairs, used for the solution of non-linear PDEs. A large series of Darboux transformations may be constructed using Wronskians built from some number of individual solutions of the original PDE. In this talk we prove a long-standing conjecture that this construction captures all possible Darboux transformations for transformations of order two, while for transformations of order one the construction captures everything but two Laplace transformations. An introduction into the theory will be provided.
Let $K$ be a complete discrete valuation field of characteristic zero with residue field $k_K$ of characteristic $p > 0$. Let $L/K$ be a finite Galois extension with Galois group $G = \text{Gal}(L/K)$ and suppose that the induced extension of residue fields $k_L/k_K$ is separable. Let $W_n(.)$ denote the ring of $p$-typical Witt vectors of length $n$. Hesselholt [Galois cohomology of Witt vectors of algebraic integers, Math. Proc. Cambridge Philos. Soc. 137(3) (2004), 551-557] conjectured that the pro-abelian group ${H^1(G,W_n(O_L))}_{n>0}$ is isomorphic to zero. Hogadi and Pisolkar [On the cohomology of Witt vectors of $p$-adic integers and a conjecture of Hesselholt, J. Number Theory 131(10) (2011), 1797-1807] have recently provided a proof of this conjecture. In this talk, we present a simplified version of the original proof which avoids many of the calculations present in that version.
Two sets of functions are studied to ascertain whether they are Stieltjes functions and whether they are completely monotonic. The first group of functions are all built from the Lambert $W$ function. The $W$ function will be reviewed briefly. It will be shown that $W$ is Bernstein and various functions containing $W$ are Stieltjes. Explicit expressions for the Stieltjes transforms are obtained. We also give some new results regarding general Stieltjes functions.
The second set of functions was posed as a challenge by Christian Berg in 2002. The functions are $(1+a/x)^{(x+b)}$ for various $a$ and $b$. We show that the function is Stieltjes for some ranges of $a,b$ and investigate complete monotonicity experimentally for a larger range. We claim an accurate experimental value for the range.
My co-authors are Rob Corless, Peter Borwein, German Kalugin and Songxin Liang.
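As a small illustration (not from the talk), the principal branch of $W$ on $x \ge 0$ can be evaluated by Newton iteration on $we^w = x$; the function name below is my own.

```python
import math

# Evaluate the principal branch W(x) for x >= 0 by Newton iteration
# on f(w) = w*exp(w) - x, with f'(w) = exp(w)*(w + 1).
def lambert_w(x, tol=1e-14):
    w = math.log1p(x)  # reasonable starting guess for x >= 0
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

print(lambert_w(math.e))  # W(e) = 1, since 1 * e^1 = e
print(lambert_w(1.0))     # the omega constant, approximately 0.567143
```

Newton converges quadratically here because $we^w$ is increasing and convex on $w \ge 0$, so a handful of iterations suffice for full double precision.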
Graph closures have recently become an important tool in Hamiltonian graph theory, since the use of closure techniques often substantially simplifies the structure of a graph under consideration while preserving some of its prescribed properties (usually of Hamiltonian type). In the talk we show the basic ideas behind the construction of some graph closures for claw-free graphs, and techniques that allow one to reduce the problem to cubic graphs. The approach will be illustrated on a recently introduced closure concept for Hamilton-connectedness in claw-free graphs and, as an application, an asymptotically sharp Ore-type degree condition for Hamilton-connectedness in claw-free graphs will be obtained.
We consider a general class of convex optimization problems in which one seeks to minimize a strongly convex function over a closed and convex set which is by itself an optimal set of another convex problem. We introduce a gradient-based method, called the minimal norm gradient method, for solving this class of problems, and establish the convergence of the sequence generated by the algorithm as well as a rate of convergence of the sequence of function values. A portfolio optimization example is given in order to illustrate our results.
Having been constructed as trading strategies, option spreads are also used in margin calculations for offsetting positions in options. All option spreads that appear in trading and margining practice have two, three or four legs. As shown in Rudd and Schroeder (Management Sci, 1982), the problem of margining option portfolios where option spreads with two legs are used for offsetting can be solved in polynomial time by network flow algorithms. However, spreads with only two legs do not provide sufficient accuracy in measuring risk. Therefore, margining practice also employs spreads with three and four legs. A polynomial-time solution to the extension of the problem where option spreads with three and four legs are also used for offsetting is not known. We propose a heuristic network-flow algorithm for this extension and present a computational study that demonstrates high efficiency of the proposed algorithm in margining practice.
We consider the problem of packing ellipsoids of different size and shape in an ellipsoidal container so as to minimize a measure of total overlap. The motivating application is chromosome organization in the human cell nucleus. A bilevel optimization formulation is described, together with an algorithm for the general case and a simpler algorithm for the special case in which all ellipsoids are in fact spheres. We prove convergence to stationary points of this nonconvex problem, and describe computational experience. The talk describes joint work with Caroline Uhler (IST, Vienna).
We prove that it is NP-hard for a coalition of two manipulators to compute how to manipulate the Borda voting rule. This resolves one of the last open problems in the computational complexity of manipulating common voting rules. Because of this NP-hardness, we treat computing a manipulation as an approximation problem where we try to minimize the number of manipulators. Based on ideas from bin packing and multiprocessor scheduling, we propose two new approximation methods to compute manipulations of the Borda rule. Experiments show that these methods significantly outperform the previous best known approximation method. We are able to find optimal manipulations in almost all the randomly generated elections tested. Our results suggest that, whilst computing a manipulation of the Borda rule by a coalition is NP-hard, computational complexity may provide only a weak barrier against manipulation in practice.
We also consider Nanson's and Baldwin's voting rules that select a winner by successively eliminating candidates with low Borda scores. We theoretically and experimentally demonstrate that these rules are significantly more difficult to manipulate compared to the Borda rule. In particular, with unweighted votes, it is NP-hard to manipulate either rule with one manipulator, whilst with weighted votes, it is NP-hard to manipulate either rule with a small number of candidates and a coalition of manipulators.
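For illustration only (this is the standard greedy heuristic for a single Borda manipulator, not necessarily the authors' method): rank the preferred candidate first, then the remaining candidates in increasing order of their current Borda score. A sketch, with names of my own choosing:

```python
def borda_scores(profile, candidates):
    """Borda: a candidate ranked i-th (0-based) among m gets m-1-i points."""
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in profile:
        for i, c in enumerate(ranking):
            scores[c] += m - 1 - i
    return scores

def greedy_manipulation(profile, candidates, preferred):
    """Preferred candidate first; the rest in increasing current score."""
    scores = borda_scores(profile, candidates)
    rest = sorted((c for c in candidates if c != preferred),
                  key=lambda c: scores[c])
    return [preferred] + rest

candidates = ["a", "b", "c"]
profile = [["b", "c", "a"], ["c", "b", "a"]]     # two sincere voters
vote = greedy_manipulation(profile, candidates, "a")
final = borda_scores(profile + [vote], candidates)
print(vote, final)
```

In this tiny example a single manipulator cannot make "a" win, which is consistent with the point that hardness results concern the general decision problem, not every instance.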
The minimal degree of a finite group $G$ is the smallest non-negative integer $n$ such that $G$ embeds in $\mathrm{Sym}(n)$. This defines an invariant $\mu(G)$ of the group. In this talk, I will present some interesting examples of calculating $\mu(G)$ and examine how this invariant behaves under taking direct products and homomorphic images.
In particular, I will focus on the problem of determining the smallest degree for which we obtain a strict inequality $\mu(G \times H) < \mu(G) + \mu(H)$, for two groups $G$ and $H$. The answer to this question also leads us to consider the problem of exceptional permutation groups. These are groups $G$ that possess a normal subgroup $N$ such that $\mu(G/N) > \mu(G)$. They are somewhat mysterious in the sense that a particular homomorphic image becomes 'harder' to faithfully represent than the group itself. I will present some recent examples of exceptional groups and detail recent developments in the 'abelian quotients conjecture', which states that $\mu(G/N) < \mu(G)$ whenever $G/N$ is abelian.
Lattice paths effectively model phenomena in chemistry, physics and probability theory. Techniques of analytic combinatorics are very useful in determining asymptotic estimates for enumeration, although the asymptotic growth of the number of self-avoiding walks on a given lattice is known empirically but not proven. We survey several families of lattice paths and their corresponding enumerative results, both explicit and asymptotic. We conclude with recent work on combinatorial proofs of asymptotic expressions for walks confined by two boundaries.
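As a small worked example of lattice-path enumeration (illustrative only, not from the talk): paths with steps $\pm 1$ that stay non-negative and return to 0 after $2n$ steps (Dyck paths) are counted by the Catalan numbers $C_n = \binom{2n}{n}/(n+1)$, which a direct dynamic program over heights confirms:

```python
import math

def dyck_paths(n):
    """Count +1/-1 paths of length 2n staying >= 0 and ending at height 0."""
    heights = {0: 1}                       # ways to reach each height so far
    for _ in range(2 * n):
        nxt = {}
        for h, ways in heights.items():
            for step in (1, -1):
                if h + step >= 0:          # the non-negativity constraint
                    nxt[h + step] = nxt.get(h + step, 0) + ways
        heights = nxt
    return heights.get(0, 0)

catalan = lambda n: math.comb(2 * n, n) // (n + 1)
print([dyck_paths(n) for n in range(6)])
print([catalan(n) for n in range(6)])
```

The same transfer-style recursion adapts to walks confined by two boundaries by simply rejecting steps that leave the strip.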
A Hamilton surface decomposition of a graph is a decomposition of the collection of shortest cycles in such a way that each member of the decomposition determines a surf