The minimal degree of a finite group $G$ is the smallest non-negative integer $n$ such that $G$ embeds in $\Sym(n)$. This defines an invariant of the group $\mu(G)$. In this talk, I will present some interesting examples of calculating $\mu(G)$ and examine how this invariant behaves under taking direct products and homomorphic images.
In particular, I will focus on the problem of determining the smallest degree for which we obtain a strict inequality $\mu(G \times H) < \mu(G) + \mu(H)$, for two groups $G$ and $H$. The answer to this question also leads us to consider the problem of exceptional permutation groups. These are groups $G$ that possess a normal subgroup $N$ such that $\mu(G/N) > \mu(G)$. They are somewhat mysterious in the sense that a particular homomorphic image becomes 'harder' to faithfully represent than the group itself. I will present some recent examples of exceptional groups and detail recent developments in the 'abelian quotients conjecture', which states that $\mu(G/N) \le \mu(G)$ whenever $G/N$ is abelian.
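The abstract does not give formulas, but for cyclic groups the minimal degree is classical: if $n = p_1^{a_1} \cdots p_k^{a_k}$ then $\mu(C_n) = p_1^{a_1} + \cdots + p_k^{a_k}$. A short sketch (function name mine) that also illustrates the additivity $\mu(C_m \times C_n) = \mu(C_m) + \mu(C_n)$ for coprime $m$, $n$:

```python
def minimal_degree_cyclic(n):
    """mu(C_n) for the cyclic group of order n = p1^a1 * ... * pk^ak:
    the minimal faithful permutation degree is p1^a1 + ... + pk^ak
    (and mu(C_1) = 0)."""
    if n == 1:
        return 0
    total, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            prime_power = 1
            while n % d == 0:
                n //= d
                prime_power *= d
            total += prime_power
        d += 1
    if n > 1:
        total += n  # leftover prime factor
    return total

# mu(C_6) = 2 + 3 = 5 = mu(C_2) + mu(C_3), so no strict inequality here;
# mu(C_12) = 4 + 3 = 7, far below the 12 points a regular action would use
assert minimal_degree_cyclic(6) == 5
assert minimal_degree_cyclic(12) == 7
```

This makes concrete why the strict inequality $\mu(G \times H) < \mu(G) + \mu(H)$ is the interesting case: for cyclic groups of coprime order the sum is always attained exactly.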
Lattice paths effectively model phenomena in chemistry, physics and probability theory. Techniques of analytic combinatorics are very useful for determining asymptotic enumeration estimates, although the asymptotic growth of the number of self-avoiding walks on a given lattice is known only empirically, not rigorously. We survey several families of lattice paths and their corresponding enumerative results, both explicit and asymptotic. We conclude with recent work on combinatorial proofs of asymptotic expressions for walks confined by two boundaries.
A Hamilton surface decomposition of a graph is a decomposition of the collection of shortest cycles in such a way that each member of the decomposition determines a surface (with maximum Euler characteristic). Some sufficient conditions for Hamilton surface decomposition of Cartesian products of graphs are obtained. Necessary and sufficient conditions are found for the case when the factors are even cycles.
We discuss the asymmetric sandwich theorem, a generalization of the Hahn–Banach theorem. As applications, we derive various results on the existence of linear functionals in functional analysis that include bivariate, trivariate and quadrivariate generalizations of the Fenchel duality theorem. We consider both results that use a simple boundedness hypothesis (as in Rockafellar’s version of the Fenchel duality theorem) and also results that use Baire’s theorem (as in the Robinson–Attouch–Brezis version of the Fenchel duality theorem).
Thomas will conclude his talks this Thursday, finishing our look at (parts of) the survey paper Dependent Random Choice by Jacob Fox and Benny Sudakov: http://arxiv.org/abs/0909.3271
Probability densities are a major tool in exploratory statistics and stochastic modelling. I will talk about a numerical technique for the estimation of a probability distribution from scattered data using exponential families and a maximum a posteriori approach with Gaussian process priors. Using Cameron-Martin theory, it can be seen that density estimation leads to a nonlinear variational problem with a functional defined on a reproducing kernel Hilbert space. This functional is strictly convex. A dual problem based on Fenchel duality will also be given. The (original) problem is solved using a Newton-Galerkin method with damping for global convergence. In this talk I will discuss some theoretical results relating to the numerical solution of the variational problem and the results of some computational experiments. A major challenge is of course the curse of dimensionality, which appears when high-dimensional probability distributions are estimated.
In this talk I attempt to explain a general approach to proving irrationality and linear independence results for q-hypergeometric series. An explicit Padé construction is introduced, with some (quantitative) arithmetic implications for well-known q-mathematical constants.
We look at (parts of) the survey paper Dependent Random Choice by Jacob Fox and Benny Sudakov: http://arxiv.org/abs/0909.3271. The abstract of the paper says "We describe a simple and yet surprisingly powerful probabilistic technique which shows how to find in a dense graph a large subset of vertices in which all (or almost all) small subsets have many common neighbors. Recently this technique has had several striking applications to Extremal Graph Theory, Ramsey Theory, Additive Combinatorics, and Combinatorial Geometry. In this survey we discuss some of them." My plan for the seminar is to start with a quick recap of the classics of extremal (hyper)graph theory (i.e. Turan, Ramsey, Ramsey-Turan), then look at some simple examples for the probabilistic method in action, and finally come to the striking applications mentioned in the quoted abstract. Only elementary probability is required.
Modular forms have played an important role in number theory for over one hundred years. They are also of interest in areas such as topology, cryptography and communications network theory. More recently, Peter Sarnak’s August talk, "Chaos, Quantum Mechanics and Number Theory", strongly suggested a link between modular forms and quantum mechanics. In this talk we explain modular forms in the context of seeking a formula for the number of representations of an integer as the sum of four squares.
The arrival of online casinos in 1996 brought games that you would find at land-based casinos to the computer screens of gamblers all over the world. A major opportunity in online casinos lies in automating play across several computers for favourable games, as this has the potential to generate a significant amount of profit. This article applies this concept to online progressive video poker games. By establishing a set of characteristics for comparing different games, analyses are carried out to identify which game should be the starting point for building an automated system. Bankroll management and playing strategies are also analysed in this article, and are shown to be important components if profiting from online gambling is to be a long-term business.
Within the topic of model-based forecasting with exponential smoothing, this paper seeks to contribute to the understanding of the property of certain stochastic processes to converge almost surely to a constant. It provides a critical discussion of the related views and ideas found in the recent forecasting literature and aims to resolve the present confusion by reviewing classical and lesser-known theorems of probability theory and random processes. The paper then argues that the usefulness of exponential smoothing for modelling and forecasting sequential count data is limited, and that methods which either are not based on exponential smoothing or use it in a more flexible way are worth exploring. An approach to forecasting such data based on applying exponential smoothing to the probabilities of each count outcome is thus introduced, and its merits are discussed in the context of the pertinent statistical literature.
In this talk we consider a problem of scheduling several jobs on multiple machines subject to precedence and resource constraints. Each job has a due date, and the objective is to minimize the cumulative weighted tardiness across all jobs. We investigate how to efficiently obtain heuristic solutions on multi-core computers using an Ant Colony System framework for the optimisation. The talk will discuss some of the challenges that arise in designing a multi-threaded heuristic and provide computational results for some alternative algorithm variants. The results show that the ACS heuristic is more effective than other methods developed to date, particularly for large problem instances.
The relation between mechanics and optimization goes back at least to Euler and was further strengthened by the Lagrangian and Hamiltonian formulations of Newtonian mechanics. Since then, numerous variational formulations of mechanical phenomena have been proposed and, although the link to optimization has often been somewhat obscured in the subsequent development of numerical methods, it is in fact as strong as ever. In this talk, I will summarize some of the recent developments in the application of modern mathematical programming methods to problems involving the simulation of mechanical phenomena. While the methodology is quite general, emphasis will be on static and dynamic deformation processes in civil engineering, geomechanics and the earth sciences.
The Feasibility Pump (FP) has proved to be an effective method for finding feasible solutions to Mixed-Integer Programming problems. We investigate the benefits of replacing the rounding procedure with a more sophisticated integer line search that efficiently explores a larger set of integer points with the aim of obtaining an integer feasible solution close to an FP iterate. An extensive computational study on 1000+ benchmark instances demonstrates the effectiveness of the proposed approach.
A common issue when integrating airline planning processes
is the long planning horizon of the crew pairing problem. We propose a
new approach to the crew pairing problem through which we retain a
significant amount of flexibility. This allows us to solve an
integrated aircraft routing, crew pairing, and tail number assignment
problem only a few days before the day of operations and with a rolling
planning horizon. The model simultaneously schedules appropriate rest
periods for all crews and maintenance checks for all tail numbers.
A Branch-and-Price method is proposed in which each tail number and
each 'crew block' is formulated as a subproblem.
A water and sewage system, a power grid, and a
telecommunication network are all examples of network
infrastructures. Network infrastructures are common in
many industries. A network infrastructure is characterized by physical
links and connection points. Examples of physical links are pipes
(water and sewage system), fiber optic cables (telecommunication
network), power lines (power grid), and tracks (rail network). Such
network infrastructures have to be maintained and, often, have to be
upgraded or expanded. Network upgrades and expansions typically occur
over a period of time due to budget constraints and other
considerations. Therefore, it becomes important to determine both when
and where network upgrades and expansions should take place so as to
minimize the infrastructure investment as well as current and future
operational costs.
We introduce a class of multi-period network infrastructure expansion
problems that allow us to study the key issues related to the choice
and timing of infrastructure expansions and their impact on the costs
of the activities performed on that infrastructure. We focus on the
simplest variant, an incremental shortest path problem (ISPP). We show
that even ISPP is NP-hard, we introduce a special case that is
polynomially solvable, we derive structural solution properties, we
present an integer programming formulation and classes of valid
inequalities, and we discuss the results of a computational study.
The classical single period problem (SPP) has wide applicability, especially in the service industries that dominate the economy. In this paper a single period production problem is considered as a specific type of SPP. The SPP model is extended by considering the probability of scrap and rework in production at the beginning of and during the period. The optimal solution, which maximizes the expected total profit, is obtained. When scrap items and defective items requiring rework are produced, the optimal profit of the system is reduced in comparison with an ideal production system. Moreover, the reduction in profit is more sensitive to an increase in the probability of producing scrap items than to an increase in the probability of producing defective items. These results can help managers make the right decision about changing or revising machines or technologies.
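For background, the classical SPP optimum is the critical-fractile (newsvendor) quantity. A minimal sketch under normally distributed demand; the function and parameter names are mine, and this is the unextended baseline, not the paper's scrap/rework model:

```python
from statistics import NormalDist

def spp_order_quantity(mean_demand, sd_demand, unit_cost, price, salvage=0.0):
    """Critical-fractile solution of the classical SPP (newsvendor):
    order q* = F^{-1}(cu / (cu + co)) for normally distributed demand,
    where cu = price - unit_cost is the underage cost and
    co = unit_cost - salvage is the overage cost."""
    cu = price - unit_cost
    co = unit_cost - salvage
    return NormalDist(mean_demand, sd_demand).inv_cdf(cu / (cu + co))

# symmetric costs put the order at mean demand; a higher margin pushes it above
assert abs(spp_order_quantity(100, 20, 5, 10) - 100) < 1e-9
assert spp_order_quantity(100, 20, 5, 15) > 100
```

Scrap and rework effectively raise the overage cost, which lowers the critical fractile and the optimal quantity relative to this ideal system.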
In this presentation I briefly discuss practical and philosophical issues related to the role of the peer-review process in maintaining the quality of scientific publications. The discussion is based on, among other things, my experience over the past eight years in containing the spread of voodoo decision theories in Australia. To motivate the discussion, I ask: how do you justify the use of a model of local robustness (operating in the neighborhood of a wild guess) to manage Black Swans and Unknown Unknowns?
Design of a complex system needs both micro- and macro-level competencies to capture the underlying structure of a complex problem while ensuring convergence to a good solution point. Systems such as complex organizations, complex New Product Development (NPD) and complex networks of firms (Supply Chains, or SC) require competencies at both the macro level (coordination and integration) and the micro level (capable designers and teams for NPD, capable firms in SC). Given the high complexity of such problems at both levels, two kinds of errors can occur at each: 1) acceptance of a wrong solution or rejection of a right solution at the micro level; 2) coordination of entities that do not need any coordination [e.g. teams or designers working in NPD might put too much time into meetings, and firms in an SC might lose their flexibility due to limitations imposed by powerful lead firms], or a lack of deployed resources for entities that do need coordination [e.g. inconsistencies in decisions made in decentralized systems such as NPD and SC]. In this paper a simple and parsimonious Agent Based Model (ABM) of NK type is built and simulated to study these complex interactive systems. The results of the simulations provide some insights into the imperfect management of such systems. For instance, we found that asymmetry in either of the above-mentioned errors favours a particular management policy for these systems.
A problem that frequently arises in environmental surveillance is where to place a set of sensors in order to maximise collected information. In this article we compare four methods for solving this problem: a discrete approach based on the classical k-median location model, a continuous approach based on the minimisation of the prediction error variance, an entropy-based algorithm, and simulated annealing. The methods are tested on artificial data and data collected from a network of sensors installed in the Springbrook National Park in Queensland, Australia, for the purpose of tracking the restoration of biodiversity. We present an overview of these methods and a comparison of results.
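To illustrate the first of the four methods, the discrete k-median model is often attacked with a simple greedy heuristic. The sketch below is my own toy, with Manhattan distances and candidate locations restricted to the sites themselves, not the authors' implementation:

```python
def greedy_k_median(sites, k):
    """Greedy heuristic for the discrete k-median placement model:
    repeatedly add the candidate location (here restricted to the sites
    themselves) that most reduces the total Manhattan distance from
    every site to its nearest chosen sensor."""
    def cost(selection):
        return sum(min(abs(sx - cx) + abs(sy - cy) for cx, cy in selection)
                   for sx, sy in sites)

    chosen = []
    for _ in range(k):
        best = min((s for s in sites if s not in chosen),
                   key=lambda s: cost(chosen + [s]))
        chosen.append(best)
    return chosen

# two well-separated clusters: the greedy picks one sensor in each
sites = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
assert sorted(greedy_k_median(sites, 2)) == [(0, 0), (10, 10)]
```

The continuous, entropy-based and simulated-annealing approaches trade this combinatorial objective for prediction-error or information criteria, which is what the comparison in the article is about.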
This talk presents an innovative model for describing the effects of QM on organizational productivity, traditionally researched using statistical models. Learning inside organizations, combined with the information-processing metaphor of organizations, is applied to build a computational model for this research. A reinforcement learning (RL) algorithm is implemented in the computational model to characterize the effects of quality leadership on productivity. The results show that effective quality leadership, being a balanced combination of exploration of new actions and exploitation of previous good actions, outperforms pure exploration or exploitation strategies in the long run. However, pure exploitation outperforms the exploration and RL algorithms in the short term. Furthermore, the effects of the complexity of customer requirements on productivity are investigated. From the results it can be argued that more complexity usually leads to less productivity. Also, the gap between the random-action algorithm and RL is reduced when the complexity of customer requirements increases. As regards agent types, it can be inferred that well-balanced business processes comprised of similar agents (in terms of the agents’ processing time and accuracy) perform better than other scenarios.
We look at (parts of) the survey paper Dependent Random Choice by Jacob Fox and Benny Sudakov: http://arxiv.org/abs/0909.3271
The abstract of the paper says "We describe a simple and yet surprisingly powerful probabilistic technique which shows how to find in a dense graph a large subset of vertices in which all (or almost all) small subsets have many common neighbors. Recently this technique has had several striking applications to Extremal Graph Theory, Ramsey Theory, Additive Combinatorics, and Combinatorial Geometry. In this survey we discuss some of them."
My plan for the seminar is to start with a quick recap of the classics of extremal (hyper)graph theory (i.e. Turan, Ramsey, Ramsey-Turan), then look at some simple examples for the probabilistic method in action, and finally come to the striking applications mentioned in the quoted abstract.
Only elementary probability is required.
We interpret the Hamiltonian Cycle problem (HCP) as an optimisation problem with a determinant objective function, naturally arising from the embedding of HCP into a Markov decision process. We also exhibit a characteristic structure of the class of all cubic graphs that stems from the spectral properties of their adjacency matrices and provide an analytic explanation of this structure.
I shall describe highlights of my two decades of experience with Advanced Collaborative Environments (ACEs) in Canada and Australia, running shared seminars, conferences, resources and courses over the internet. I shall also describe the AMSI Virtual Lab proposal which has just been submitted to NeCTAR. The slides for much of this talk are at http://www.carma.newcastle.edu.au/jon/aces11.pdf.
This week we shall conclude our look at the paper by Dave Witte on Hamilton directed cycles in Cayley digraphs of prime power order.
We shall start looking at Dave Witte's (now Dave Morris) proof that connected Cayley digraphs of prime power order have Hamilton directed cycles.
Mean periodic functions of a single real variable were an innovation of the mid-20th century. Although not as well known as almost periodic functions, they have some nice properties, with applications to certain mean value theorems.
Noncommutative geometry is based on fairly sophisticated methods: noncommutative C*-algebras are called noncommutative topological spaces, noncommutative von Neumann algebras are noncommutative measure spaces, and Hopf algebras and homological invariants describe the geometry. Standard topology, on the other hand, is based on naive intuitions about discontinuity: a continuous function is one whose graph does not have any gaps, and cutting and gluing are used to analyse and reconstruct geometrical objects. This intuition does not carry over to the noncommutative theory, and the dictum from quantum mechanics that it does not make sense any more to think about point particles perhaps explains a lack of expectation that it should. The talk will describe an attempt to make this transfer by computing the polar decompositions of certain operators in the group C*-algebras of free groups. The computation involves some identities and evaluations of integrals that might interest the audience, and the polar decomposition may be interpreted as a noncommutative version of the double angle formula familiar from high school geometry.
The talk will begin with a reminder of what a triangulated category is, the context in which they arose, and why we care about them. Then we will discuss the theory of compactly generated triangulated categories, again illustrating the applications. This theory is old and well understood. Finally we will come to well generated categories, where several open problems remain very mysterious.
The choice of a plan for radiotherapy treatment for an individual cancer patient requires the careful trade-off between the goals of delivering a sufficiently high radiation dose to the tumour and avoiding irradiation of critical organs and normal tissue. This problem can be formulated as a multi-objective linear programme (MOLP). In this talk we present a method to compute a finite set of non-dominated points that can be proven to uniformly cover the complete non-dominated set of an MOLP (a finite representation). This method generalises and improves upon two existing methods from the literature. We apply this method to the radiotherapy treatment planning problem, showing some results for clinical cases. We illustrate how the method can be used to support clinicians’ decision making when selecting a treatment plan. The treatment planner only needs to specify a threshold for recognising two treatment plans as different and is able to interactively navigate through the representative set without the trial-and-error process often used in practice today.
In this seminar talk we will recall results on the Drop property in Banach spaces and study it in geodesic spaces. In particular, we will show a variant of Daneš' Drop Theorem in Busemann convex spaces and derive well-posedness results about the minimization of convex functions. This talk is based on joint work with Adriana Nicolae.
In Euclidean geometry, Ptolemy's theorem is a relation between the four sides and two diagonals of a cyclic quadrilateral (a quadrilateral whose vertices lie on a common circle). If the quadrilateral is given with its four vertices $A$, $B$, $C$, and $D$ in order, then the theorem states that: $$|AC| \cdot |BD| = |AB| \cdot |CD| + |AD| \cdot |BC|.$$ Furthermore, it is well known that in every Euclidean (or Hilbert) space $H$ we have that $$||x - y|| \cdot ||z - w|| \leq ||x - z|| \cdot ||y - w|| + ||z - y|| \cdot ||x - w||$$ for any four points $w, x, y, z \in H$. This is the classical Ptolemy inequality and it is well-known that it characterizes the inner product spaces among all normed spaces. A Ptolemy metric space is any metric space for which the same inequality holds, replacing norms by distances, for any four points. CAT(0) spaces are geodesic spaces of global nonpositive curvature in the sense of Gromov. Hilbert spaces are CAT(0) spaces and, even more, CAT(0) spaces have many common properties with Hilbert spaces. In particular, although a Ptolemy geodesic metric space need not be CAT(0), any CAT(0) space is a Ptolemy metric space. In this expository talk we will show some recent progress about the connection between Ptolemy metric spaces and CAT(0) spaces.
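The Ptolemy inequality for Euclidean space is easy to verify numerically. The following sketch (my own illustration) checks the four-point inequality for random points in $\mathbb{R}^3$:

```python
import math
import random

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def ptolemy_holds(w, x, y, z, tol=1e-12):
    # ||x-y|| ||z-w|| <= ||x-z|| ||y-w|| + ||z-y|| ||x-w||
    return (dist(x, y) * dist(z, w)
            <= dist(x, z) * dist(y, w) + dist(z, y) * dist(x, w) + tol)

random.seed(0)
point = lambda: tuple(random.uniform(-1, 1) for _ in range(3))
assert all(ptolemy_holds(point(), point(), point(), point())
           for _ in range(10000))
```

A Ptolemy metric space is exactly a space where `ptolemy_holds` would succeed for every quadruple with distances in place of norms; the talk concerns which geodesic spaces of this kind are CAT(0).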
This week the discrete mathematics instructional seminar will continue with a consideration of the Lovasz problem. This is the last meeting of the seminar until 13 October.
One of the most intriguing problems in metric fixed point theory is whether we can find closed convex and unbounded subsets of Banach spaces with the fixed point property. A celebrated theorem due to W.O. Ray in 1980 states that this cannot happen if the space is Hilbert. This problem was so poorly understood that two antagonistic questions were raised: the first asked whether this property characterizes Hilbert spaces within the class of Banach spaces, while the second asked whether it characterizes any space at all, that is, whether Ray's theorem holds in every Banach space. The second problem is still open, but the first has recently been answered in the negative by T. Domínguez Benavides, who showed that Ray's theorem also holds true in the classical space of real sequences $c_0$.
The situation seems, however, to be completely different for CAT(0) spaces. Although Hilbert spaces are a particular class of CAT(0) spaces, there are different examples of CAT(0) spaces in the literature, including $\mathbb{R}$-trees, for which we can find closed convex and unbounded subsets with the fixed point property. In this talk we will look closely at this problem. First, we will introduce a geometrical condition inspired by the Banach-Steinhaus theorem for CAT(0) spaces under which we can still assure that Ray's theorem holds true. We will provide different examples of CAT(0) spaces with this condition, but we will notice that all these examples are of a very strong Hilbertian nature. Then we will look at $\delta$-hyperbolic geodesic spaces. Viewed from far away, these spaces, if unbounded, resemble $\mathbb{R}$-trees, so it is natural to try to find convex closed and unbounded subsets with the fixed point property in these spaces. We will present some partial results in this direction.
This talk is based on joint work with Bożena Piątek.
We are interested in local geometrical properties of a Banach space which are preserved under natural embeddings in all even dual spaces. An example of this behaviour which we generalise is:
if the norm of the space $X$ is Fréchet differentiable at $x \in S(X)$ then the norm of the second dual $X^{**}$ is Fréchet differentiable at $\hat{x}\in S(X^{**})$, the norm of $X^{****}$ is Fréchet differentiable at $\hat{\hat{x}} \in S(X^{****})$, and so on.
The results come from a study of Hausdorff upper semicontinuity properties of the duality mapping characterising general differentiability conditions satisfied by the norm.
The topic is Lovasz's Conjecture that all connected vertex-transitive graphs have Hamilton paths.
We resolve and further study a sinc integral evaluation, first posed in The American Mathematical Monthly in [1967, p. 1015], which was solved in [1968, p. 914] and withdrawn in [1970, p. 657]. After a short introduction to the problem and its history, we give a general evaluation which we make entirely explicit in the case of the product of three sinc functions. Finally, we exhibit some general structure of the integrals in question.
Basically, a function is Lipschitz continuous if it has a bounded slope. This notion can be extended to set-valued maps in different ways. We will mainly focus on one of them: the so-called Aubin (or Lipschitz-like) property. We will employ this property to analyze the iterates generated by an iterative method known as the proximal point algorithm. Specifically, we consider a generalized version of this algorithm for solving a perturbed inclusion $$y \in T(x),$$ where $y$ is a perturbation element near 0 and $T$ is a set-valued mapping. We will analyze the behavior of the convergent iterates generated by the algorithm and we will show that they inherit the regularity properties of $T$, and vice versa. We analyze the cases when the mapping $T$ is metrically regular (the inverse map has the Aubin property) and strongly regular (the inverse is locally a Lipschitz function). We will not assume any type of monotonicity.
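For a concrete one-dimensional instance, take $T = \partial|\cdot|$, whose resolvent is the soft-thresholding map. A sketch of the unperturbed ($y = 0$) proximal point iteration; this is illustrative only, not the generalized algorithm analyzed in the talk:

```python
def soft_threshold(v, lam):
    """Resolvent (I + lam * d|.|)^{-1}(v) of the subdifferential of |x|."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def proximal_point(x0, lam=0.5, iters=50):
    """x_{k+1} = (I + lam*T)^{-1}(x_k) with T = d|.|; the iterates reach
    the unique solution of 0 in T(x), namely x = 0."""
    x = x0
    for _ in range(iters):
        x = soft_threshold(x, lam)
    return x

assert proximal_point(3.7) == 0.0
assert proximal_point(-2.0) == 0.0
```

The resolvent here is a Lipschitz single-valued map even though $T$ is set-valued at the origin, a tiny instance of the regularity transfer between $T$ and the iteration that the talk studies.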
This Thursday is your chance to start anew! I shall be starting a presentation of the best work that has been done on Lovasz's famous 1979 problem (now a conjecture) stating that every connected vertex-transitive graph has a Hamilton path. This is good stuff and requires minimal background.
This week we shall conclude the proof of the uniqueness of the Hoffman-Singleton graph.
We introduce the concept and several examples of q-analogs. A particular focus is on the q-binomial coefficients, which are characterized in a variety of ways. We recall classical binomial congruences and review their known q-analogs. Finally, we establish a full q-analog of Ljunggren's congruence, which states that $\binom{ap}{bp} \equiv \binom{a}{b} \pmod{p^3}$.
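For reference, the Gaussian binomial coefficients satisfy the Pascal-type recurrence $\binom{n}{k}_q = \binom{n-1}{k-1}_q + q^k\binom{n-1}{k}_q$, and the classical Ljunggren congruence can be checked directly. A small sketch (naming mine):

```python
from math import comb

def q_binomial(n, k):
    """Coefficient list of the Gaussian polynomial [n choose k]_q,
    built from the Pascal-type recurrence
    [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    a = q_binomial(n - 1, k - 1)
    b = q_binomial(n - 1, k)
    out = [0] * (len(b) + k)
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i + k] += c
    return out

# [4 choose 2]_q = 1 + q + 2q^2 + q^3 + q^4, which reduces to C(4,2) = 6 at q = 1
assert q_binomial(4, 2) == [1, 1, 2, 1, 1]

# classical Ljunggren congruence, C(ap, bp) == C(a, b) (mod p^3), for primes p >= 5
for p in (5, 7, 11):
    for a in range(1, 5):
        for b in range(a + 1):
            assert (comb(a * p, b * p) - comb(a, b)) % p**3 == 0
```

The q-analog established in the talk replaces both sides by q-binomials, with a correction term; the check above covers only the classical $q \to 1$ case.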
20-minute presentation followed by 10 minutes of questions and discussion.
Given a mixed-up sequence of distinct numbers, say 4 2 1 5 7 3 6, can you pass them through an infinite stack (first-in-last-out) from right to left, and put them in order?
Push 4 (remaining input 2 1 5 7 3 6, stack: 4); push 2 (stack: 2 4); push 1, then pop 1 and pop 2 (output 1 2, stack: 4, remaining input 5 7 3 6) — now 3 must be output before 4, but pushing 5 and 7 buries 4 even deeper… umm….. This talk will be about this problem - when can you do it with one stack, two stacks (in series), an infinite and a finite capacity stack in series, etc etc? How many permutations of 1,2,...,n are there that can be sorted? The answer will lie in the "forbidden subpatterns" of permutations, and it turns out there is a whole theory of this, which I will try to describe.
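The one-stack case has a classical answer: a permutation can be sorted by a single stack exactly when it avoids the pattern 231, and the sortable permutations of length $n$ are counted by the Catalan numbers. A greedy simulation (a sketch, not taken from the talk) decides sortability:

```python
from itertools import permutations

def stack_sortable(perm):
    """Greedy one-stack sort: push each input value; pop whenever the
    stack top is the next value the sorted output needs.  Popping a
    wanted value early never hurts, so the greedy strategy is optimal."""
    stack, expected = [], 1
    for x in perm:
        stack.append(x)
        while stack and stack[-1] == expected:
            stack.pop()
            expected += 1
    return not stack

assert not stack_sortable([4, 2, 1, 5, 7, 3, 6])  # the example above gets stuck
assert not stack_sortable([2, 3, 1])              # the forbidden pattern itself
# sortable permutations of 1..4 are Catalan-many: C_4 = 14
assert sum(stack_sortable(list(p)) for p in permutations(range(1, 5))) == 14
```

The subsequence 4, 5, 3 in the example is an occurrence of the pattern 231, which is exactly why it cannot be sorted.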
We continue looking at the 1960 Hoffman-Singleton paper about Moore graphs and related topics.
It is shown that, for maximally monotone linear relations defined on a general Banach space, the monotonicities of dense type, of negative-infimum type, and of Fitzpatrick-Phelps type are the same and equivalent to monotonicity of the adjoint. This result also provides affirmative answers to two problems: one posed by Phelps and Simons, and the other by Simons.
Infinite index subgroups of integer matrix groups like $SL(n,\mathbb{Z})$ which are Zariski dense in $SL(n)$ arise in geometric diophantine problems (e.g., integral Apollonian packings) as well as in monodromy groups associated with families of varieties. One of the key features needed when applying such groups to number theoretic problems is that the congruence graphs associated with these groups are "expanders". We will introduce and explain these ideas and review some recent developments, especially those connected with the affine sieve.
The most important open problem in Monotone Operator Theory concerns the maximal monotonicity of the sum of two maximally monotone operators provided that Rockafellar's constraint qualification holds. In this talk, we prove the maximal monotonicity of the sum of a maximally monotone linear relation and the subdifferential of a proper lower semicontinuous convex function satisfying Rockafellar's constraint qualification. Moreover, we show that this sum operator is of type (FPV).
This paper considers designing permission sets to influence the project selection decision made by
a better-informed agent. The project characteristics are two-dimensional. The principal can verify the characteristics of the project selected by the agent. However, the principal cannot observe the number and characteristics of those projects that the agent could, but does not, propose. The payoffs to the agent and the principal are different. Using the calculus of variations, we solve for the optimal permission set, which can be characterized by a threshold function. We obtain comparative statics on the preference alignment and the expected number of projects available. When outcome-based incentives are feasible, we discuss the use of financial inducements to maximize social welfare. We also extend our analysis to two cases: 1) when one of the project characteristics is unobservable; and 2) when there are multiple agents with private preferences and the principal must establish a universal permission set.
Key words: calculus of variations, optimal permission set, project management.
In my talk I will try to give an overview of the ideas behind (still recent) achievements on the arithmetic properties of the numbers $\zeta(s)=\sum_{n=1}^\infty n^{-s}$ for integers $s\ge2$, with more emphasis on odd $s$. The basic ingredients of the proofs are generalized hypergeometric functions and linear independence criteria. I will also address some "most recent" results and observations in the subject, as well as connections with other problems in number theory.
These talks are aimed at extending the undergraduates' field of vision, or increasing their level of exposure to interesting ideas in mathematics. We try to present topics that are important but not covered (to our knowledge) in undergraduate coursework. Because of the brevity and intended audience of the talks, the speaker generally only scratches the surface, concentrating on the most interesting aspects of the topic.
We will investigate the existence of common fixed points for pointwise Lipschitzian semigroups of nonlinear mappings $T_t : C \to C$, where $C$ is a bounded, closed, convex subset of a uniformly convex Banach space $X$; i.e. a family such that $T_0(x) = x$ and $T_{s+t}(x) = T_s(T_t(x))$, where each $T_t$ is pointwise Lipschitzian, i.e. there exists a family of functions $a_t : C \to [0,\infty)$ such that $\|T_t(x)-T_t(y)\| \le a_t(x)\|x-y\|$ for $x, y \in C$. We will also demonstrate how the asymptotic aspect of the pointwise Lipschitzian semigroups can be expressed in terms of the respective Fréchet derivatives. We will discuss some questions related to the weak and strong convergence of certain iterative algorithms for the construction of the stationary and periodic points of such semigroups.
One of the most effective avenues in recent experimental mathematics research is the computation of definite integrals to high precision, followed by the identification of the resulting numerical values as compact analytic formulas involving well-known constants and functions. In this talk we summarize several applications of this methodology in the realm of applied mathematics and mathematical physics, in particular Ising theory, "box integrals", and the study of random walks.
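This compute-then-identify workflow can be mimicked at toy scale with the standard library: numerically evaluate an integral whose closed form happens to involve $\pi$, then recognise the value as a rational multiple of $\pi$. (Serious experimental mathematics works to hundreds of digits with integer-relation algorithms such as PSLQ; the continued-fraction recognition below is only a low-precision stand-in, and the integral is an illustrative choice, not one from the talk.)

```python
from fractions import Fraction
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# High-accuracy value of int_0^1 dx/(1+x^2), which equals pi/4 exactly.
v = simpson(lambda x: 1 / (1 + x * x), 0.0, 1.0)

# Attempt to recognise v as a rational multiple of pi.
ratio = Fraction(v / math.pi).limit_denominator(10**6)
print(ratio)  # 1/4
```

The same pattern, at far higher precision and with a richer dictionary of constants, underlies the identifications discussed in the talk.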
This talk concerns developing a numerical method of the Newton type to solve systems of nonlinear equations described by nonsmooth continuous functions. We propose and justify a new generalized Newton algorithm based on graphical derivatives, which have never been used to derive a Newton-type method for solving nonsmooth equations. Based on advanced techniques of variational analysis and generalized differentiation, we establish the well-posedness of the algorithm, its local superlinear convergence, and its global convergence of the Kantorovich type. Our convergence results hold without semismoothness or Lipschitzian assumptions, as illustrated by examples. The algorithm and main results obtained in the paper are compared with well-recognized semismooth and $B$-differentiable versions of Newton's method for nonsmooth Lipschitzian equations.
This paper studies combinations of the Riemann zeta function, based on one defined by P.R. Taylor, and shown by him to have all its zeros on the critical line. With a rescaled complex argument, this is denoted here by ${\cal T}_-(s)$, and is considered together with a counterpart function ${\cal T}_+(s)$, symmetric rather than antisymmetric about the critical line. We prove by a graphical argument that ${\cal T}_+(s)$ has all its zeros on the critical line, and that the zeros of both functions are all of first order. We also establish a link between the zeros of ${\cal T}_-(s)$ and of ${\cal T}_+(s)$ with zeros of the Riemann zeta function $\zeta(2 s-1)$, and between the distribution functions of the zeros of the three functions.
The concept of orthogonal double covers (ODC) of graphs originates in questions concerning database constraints and problems in statistical combinatorics and in design theory. An ODC of the complete graph $K_n$ by a graph $G$ is a collection of $n$ subgraphs of $K_n$, all isomorphic to $G$, such that any two of them share exactly one edge, and every edge of $K_n$ occurs in exactly two of the subgraphs. We survey some of the main results and conjectures in the area as well as constructions, generalizations and modifications of ODC.
The talk will focus on recent results or work in progress, with some open problems which span both Combinatorial Design and Sperner Theory. The work focuses upon the duality between antichains and completely separating systems. An antichain is a collection $\cal A$ of subsets of $[n]=\{1,...,n\}$ such that for any distinct $A,B\in\cal A$, $A$ is not a subset of $B$. A $k$-regular antichain on $[m]$ is an antichain in which each element of $[m]$ occurs exactly $k$ times. A CSS is the dual of an antichain. An $(n,k)$-CSS $\cal C$ is a collection of blocks of size $k$ on $[n]$, such that for each distinct $a,b\in [n]$ there are sets $A,B \in \cal C$ with $a \in A-B$ and $b \in B-A$. The notions of $k$-regular antichains of size $n$ on $[m]$ and $(n,k)$-CSSs in $m$ blocks are dual concepts. Natural questions to be considered include: Does a $k$-regular antichain of size $n$ exist on $[m]$? For $k
High-dimensional integrals come up in a number of applications like statistics, physics and financial mathematics. If explicit solutions are not known, one has to resort to approximative methods. In this talk we will discuss equal-weight quadrature rules called quasi-Monte Carlo. These rules are defined over the unit cube $[0,1]^s$ with carefully chosen quadrature points. The quadrature points can be obtained using number-theoretic and algebraic methods and are designed to have low discrepancy, where discrepancy is a measure of how uniformly the quadrature points are distributed in $[0,1]^s$. In the one-dimensional case, the discrepancy coincides with the Kolmogorov-Smirnov distance between the uniform distribution and the empirical distribution of the quadrature points and has also been investigated in a paper by Weyl published in 1916.
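The one-dimensional case mentioned above is easy to make concrete: for a sorted point set in $[0,1)$, the star discrepancy (equivalently, the Kolmogorov-Smirnov distance to the uniform distribution) has a closed-form expression. A small sketch, using the van der Corput sequence as an example of a low-discrepancy point set (the point sets below are illustrative, not from the talk):

```python
def star_discrepancy_1d(points):
    # Exact star discrepancy (= Kolmogorov-Smirnov distance to the
    # uniform distribution) of a 1-D point set in [0,1).
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

def van_der_corput(n, base=2):
    # First n terms of the van der Corput low-discrepancy sequence.
    seq = []
    for k in range(n):
        q, b, x = k, 1.0 / base, 0.0
        while q:
            q, r = divmod(q, base)
            x += r * b
            b /= base
        seq.append(x)
    return seq

N = 64
grid = [(i + 0.5) / N for i in range(N)]       # centred grid: D* = 1/(2N)
print(star_discrepancy_1d(grid))
print(star_discrepancy_1d(van_der_corput(N)))  # low discrepancy, O(log N / N)
```

In dimension $s>1$ computing the star discrepancy exactly is much harder, which is one reason constructions with provable discrepancy bounds are so valuable.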
Let $T$ be a topological space (a compact subspace of ${\mathbb R^m}$, say) and let $C(T)$ be the space of real continuous functions on $T$, equipped with the uniform norm: $\|f\| = \max_{t\in T}|f(t)|$ for all $f \in C(T)$. Let $G$ be a finite-dimensional linear subspace of $C(T)$. If $f \in C(T)$ then $$d(f,G) = \inf\{\|f-g\| : g \in G\}$$ is the distance of $f$ from $G$, and $$P_G(f) = \{g \in G : \|f-g\| = d(f,G)\}$$ is the set of best approximations to $f$ from $G$. Then $$P_G : C(T) \rightarrow P(G)$$ is the set-valued metric projection of $C(T)$ onto $G$. In the 1850s P. L. Chebyshev considered $T = [a, b]$ and $G$ the space of polynomials of degree $\leq n - 1$. Our concern is with possible properties of $P_G$. The historical development, beginning with Chebyshev, Haar (1918) and Mairhuber (1956), and the present state of knowledge will be outlined. New results will demonstrate that the story is still incomplete.
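As a minimal illustration of these definitions, take $T=[0,1]$, $f(t)=t^2$ and $G$ the constant functions: the best approximation is the midpoint of the range of $f$, and the error is half the range (the degree-zero case of Chebyshev's equioscillation, here checked on a fine grid rather than exactly):

```python
# f(t) = t^2 on T = [0,1], G = constant functions (sampled on a grid).
T = [i / 1000 for i in range(1001)]
f = [t * t for t in T]

c = (max(f) + min(f)) / 2          # best constant approximation
err = max(abs(v - c) for v in f)   # uniform distance d(f, G)
print(c, err)  # 0.5 0.5
```

Note the equioscillation: the error $\pm 0.5$ is attained with alternating signs at $t=0$ and $t=1$, which is exactly what Chebyshev's characterization predicts for one-dimensional $G$.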
The Landau-Lifshitz-Gilbert equation (LLGE) comes from a model for the dynamics of the magnetization of a ferromagnetic material. In this talk we will first describe existing finite element methods for numerical solution of the deterministic and stochastic LLGEs. We will then present another finite element solution to the stochastic LLGE. This is a work in progress jointly with B. Goldys and K-N Le.
The aim of this talk is to demonstrate how cyclic division algebras and their orders can be used to enhance wireless communications. This is done by embedding the information bits to be transmitted into smart algebraic structures, such as matrix representations of order lattices. We will recall the essential algebraic definitions and structures, and further familiarize the audience with the transmission model of fading channels. An example application of major current interest is digital video broadcasting. Examples suitable to this application will be provided.
In the late seventies, Bill Thurston defined a semi-norm on the homology of a 3-dimensional manifold which lends itself to the study of manifolds which fibre over the circle. This led him to formulate the Virtual Fibration Conjecture, which is fairly inscrutable and implies almost all major results and conjectures in the field. Nevertheless, Thurston gave the conjecture "a definite chance for a positive answer" and much research is currently devoted to it. I will describe the Thurston norm, its main properties and applications, as well as its relationship to McMullen’s Alexander norm and the geometric invariant for groups due to Bieri, Neumann and Strebel.
We resolve some recent and fascinating conjectural formulae for $1/\pi$ involving the Legendre polynomials. Our main tools are hypergeometric series and modular forms, though no prior knowledge of modular forms is required for this talk. Using these we are able to prove some general results regarding generating functions of Legendre polynomials and draw some unexpected number theoretic connections. This is joint work with Heng Huat Chan and Wadim Zudilin. The authors dedicate this paper to Jon Borwein's 60th birthday.
The elements of a free group are naturally considered to be reduced "words" in a certain alphabet. In this context, a palindrome is a group element which reads the same from left-to-right and right-to-left. Certain primitive elements, elements that can be part of a basis for the free group, are palindromes. We discuss these elements, and related automorphisms.
We meet this Thursday at the usual time when I will show you a nice application of the Edmonds-Fulkerson matroid partition theorem, namely, I'll prove that Paley graphs have Hamilton decompositions (an unpublished result).
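As a hypothetical warm-up (not the Edmonds-Fulkerson argument itself), one can build a small Paley graph and find a single Hamilton cycle by backtracking; a Hamilton decomposition would partition the edge set of this $(q-1)/2$-regular graph into $(q-1)/4$ such cycles, three of them for $q=13$:

```python
# Paley graph on GF(13): vertices 0..12, edges between quadratic-residue
# differences. Since -1 is a square mod 13, the relation is symmetric.
q = 13
squares = {(i * i) % q for i in range(1, q)}
adj = {v: sorted((v + s) % q for s in squares) for v in range(q)}

def hamilton_cycle(start=0):
    # Simple backtracking search for a Hamilton cycle.
    path, used = [start], {start}
    def extend():
        if len(path) == q:
            return path[0] in adj[path[-1]]  # can we close the cycle?
        for w in adj[path[-1]]:
            if w not in used:
                path.append(w); used.add(w)
                if extend():
                    return True
                used.remove(path.pop())
        return False
    return path if extend() else None

print(hamilton_cycle())
```

Finding one cycle is easy; the content of the theorem is that the whole edge set splits into edge-disjoint Hamilton cycles, which is where the matroid partition theorem comes in.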
We introduce, assuming only a modest background in one variable complex analysis, the rudiments of infinite dimensional holomorphy. Approaches and some answers to elementary questions arising from considering monomial expansions in different settings and spaces are used to sample the subject.
The Chaney-Schaefer $\ell$-tensor product $E\tilde{\otimes}_{\ell}Y$ of a Banach lattice $E$ and a Banach space $Y$ may be viewed as an extension of the Bochner space $L^p(\mu,Y) (1\leq p < \infty)$. We consider an extension of a classical martingale characterization of the Radon-Nikodým property in $L^p(\mu,Y)$, for $1 < p < \infty$, to $E\tilde{\otimes}_{\ell}Y$. We consider consequences of this extension, and time permitting, use it to represent set-valued measures of risk defined on Banach lattice-valued Orlicz hearts.
You are invited to celebrate the life and work of Paul Erdős!
NUMS and CARMA are holding a "Meet Paul Erdős Night" on Wednesday the 20th April starting at 4pm in V07 and we'd love you to come. You can view a poster with information about the night here.
Please RSVP by next Friday 15th April so that we can cater appropriately. To RSVP, reply to: nums@newcastle.edu.au
On Wednesday afternoon, we will be visited by Dr Stephen Hardy and Dr Kieran Larkin from Canon Information Systems Research Australia, Sydney. Drs Larkin and Hardy will be here to explore research opportunities with University of Newcastle researchers. To familiarise them with what we do and to help us understand what they do, there will be three short talks, giving information on the functions and activities of the CDSC, CARMA and Canon's group of 45 researchers. All are welcome to participate.
This week we shall start on the classical paper by Jack Edmonds and D. R. Fulkerson on partitioning matroids.
The demiclosedness principle is one of the key tools in nonlinear analysis and fixed point theory. In this talk, this principle is extended and made more flexible by incorporating two mutually orthogonal affine subspaces. Versions for finitely many (firmly) nonexpansive operators are presented. As an application, a simple proof of the weak convergence of the Douglas-Rachford splitting algorithm is provided.
In my talk I will review some recent progress on evaluations of Mahler measures via hypergeometric series and Dirichlet L-series. I will provide more details for the case of the Mahler measure of $1+x+1/x+y+1/y$, whose evaluation was observed by C. Deninger and conjectured by D. Boyd (1997). The main ingredients are relations between modular forms and hypergeometric series in the spirit of Ramanujan. The talk is based on joint work with Mat Rogers.
The stochastic Loewner evolution (SLE) is a one-parameter family of random growth processes in the complex plane introduced by the late Oded Schramm in 1999 which is predicted to describe the scaling limit of a variety of statistical physics models. Recently a number of rigorous results about such scaling limits have been established; in fact, Wendelin Werner was awarded the Fields Medal in 2006 for "his contributions to the development of stochastic Loewner evolution, the geometry of two-dimensional Brownian motion, and conformal field theory" and Stas Smirnov was awarded the Fields Medal in 2010 "for the proof of conformal invariance of percolation and the planar Ising model in statistical physics." In this talk, I will introduce some of these models including the Ising model, self-avoiding walk, loop-erased random walk, and percolation. I will then discuss SLE, describe some of its basic properties, and touch on the results of Werner and Smirnov as well as some of the major open problems in the area. This talk will be "colloquium style" and is intended for a general mathematics audience.
We shall conclude the discussion of some of the mathematics surrounding Birkhoff's Theorem about doubly stochastic matrices.
This talk will present recent theoretical and experimental results contrasting quantum randomness with pseudo-randomness.
The Mahler measure of a polynomial of several variables has been a subject of much study over the past thirty years. Very few closed forms are proven but more are conjectured. We provide systematic evaluations of various higher and multiple Mahler measures using log-sine integrals. We also explore related generating functions for the log-sine integrals. This work makes frequent use of “The Handbook” and involves extensive symbolic computations.
I will continue by showing relationships between Mahler measures and logsine integrals [PDF]. This should be comprehensible whether or not you heard Part 1.
In the 1980s, R. Grigorchuk found a finitely generated group such that the number of elements that can be written as a product of at most \(n\) generators grows faster than any polynomial in \(n\), but slower than any exponential in \(n\), so-called "intermediate" growth.
It can be described as a group of automorphisms of an infinite rooted binary tree, or in terms of abstract computing devices called "non-initial finite transducers".
In this talk I will describe what some of these short words/products of generators look like, and speculate on the asymptotic growth rate of all short words of length \(n\).
This is joint unpublished work with Mauricio Gutierrez (Tufts) and Zoran Sunic (Texas A&M).
The Mahler measure of a polynomial of several variables has been a subject of much study over the past thirty years. Very few closed forms are proven but more are conjectured. We provide systematic evaluations of various higher and multiple Mahler measures using log-sine integrals. We also explore related generating functions for the log-sine integrals. This work makes frequent use of “The Handbook” and involves extensive symbolic computations.
Co-author: Thomas Prellberg (Queen Mary, University of London)
Various kinds of paths on lattices are often used to model polymers. We describe some partially directed path models for which we find the exact generating functions, using instances of the `kernel method'. In particular, motivated by recent studies of DNA unzipping, we find and analyze the generating function for pairs of non-crossing partially directed paths with contact interactions. Although the expressions involved in the two-path problem are unwieldy and tax the capacities of Maple and Mathematica, we are still able to gain an understanding of the singularities of the generating function which govern the behaviour of the model.
Professor Jonathan Borwein shares with us his passion for \(\pi\), taking us on a journey through its rich history. Professor Borwein begins with approximations of \(\pi\) by ancient cultures, and leads us through the work of Archimedes, Newton and others to the calculation of \(\pi\) in today's age of computers.
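Archimedes' method mentioned above can be sketched in a few lines: the perimeters of inscribed and circumscribed regular polygons about the unit circle bracket \(2\pi\), and doubling the number of sides is a harmonic mean followed by a geometric mean (a standard reconstruction, not taken from the talk):

```python
import math

# Perimeters of the circumscribed (P) and inscribed (p) regular n-gons
# for the unit circle; 2*pi lies strictly between p and P.
P, p, n = 4 * math.sqrt(3), 6.0, 6   # start from hexagons
for _ in range(10):                  # double: 6 -> 6144 sides
    P = 2 * P * p / (P + p)          # harmonic mean
    p = math.sqrt(p * P)             # geometric mean
    n *= 2
print(p / 2, math.pi, P / 2)         # bounds tighten around pi
```

With 96 sides (two doublings fewer than above) this already yields Archimedes' famous bounds \(3\tfrac{10}{71} < \pi < 3\tfrac{1}{7}\).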
Professor Borwein is currently Laureate Professor in the School of Mathematical and Physical Sciences at the University of Newcastle. His research interests are broad, spanning pure, applied and computational mathematics and high-performance computing. He is also Chair of the Scientific Advisory Committee at the Australian Mathematical Sciences Institute (AMSI).
This talk will be broadcast from the Access Grid room V206 at the University of Newcastle, and will link to the West coast of Canada.
For more information visit AMSI's Pi Day website or read Jon Borwein's talk.
Concave utility functions and convex risk measures play crucial roles in economic and financial problems. The use of concave utility functions can at least be traced back to Bernoulli when he posed and solved the St. Petersburg wager problem. They had been the prevailing way to characterize rational market participants for a long period of time, until the 1970s when Black and Scholes introduced the replicating portfolio pricing method and Cox and Ross developed the risk neutral measure pricing formula. For the past several decades the `new paradigm' became the mainstream. We will show that, in fact, the `new paradigm' is a special case of the traditional utility maximization and its dual problem. Moreover, the convex analysis perspective also highlights that overlooking sensitivity analysis in the `new paradigm' is one of the main reasons that led to the recent financial crisis. It is perhaps time again for bankers to learn convex analysis.
The talk will be divided into two parts. In the first part we lay out a discrete model for financial markets. We explain the concept of arbitrage and the no arbitrage principle. This is followed by the important fundamental theorem of asset pricing in which the no arbitrage condition is characterized by the existence of martingale (risk neutral) measures. The proof of this gives us a first taste of the importance of convex analysis tools. We then discuss how to use utility functions and risk measures to characterize the preference of market agents. The second part of the talk focuses on the issue of pricing financial derivatives. We use simple models to illustrate the idea of the prevailing Black-Scholes replicating portfolio pricing method and the related Cox-Ross risk-neutral pricing method for financial derivatives. Then, we show that the replicating portfolio pricing method is a special case of portfolio optimization and the risk neutral measure is a natural by-product of solving the dual problem. Taking the convex analysis perspective of these methods h
Concave utility functions and convex risk measures play crucial roles in economic and financial problems. The use of concave utility functions can at least be traced back to Bernoulli when he posed and solved the St. Petersburg wager problem. They had been the prevailing way to characterize rational market participants for a long period of time, until the 1970s when Black and Scholes introduced the replicating portfolio pricing method and Cox and Ross developed the risk neutral measure pricing formula. For the past several decades the `new paradigm' became the mainstream. We will show that, in fact, the `new paradigm' is a special case of the traditional utility maximization and its dual problem. Moreover, the convex analysis perspective also highlights that overlooking sensitivity analysis in the `new paradigm' is one of the main reasons that led to the recent financial crisis. It is perhaps time again for bankers to learn convex analysis.
The talk will be divided into two parts. In the first part we lay out a discrete model for financial markets. We explain the concept of arbitrage and the no arbitrage principle. This is followed by the important fundamental theorem of asset pricing in which the no arbitrage condition is characterized by the existence of martingale (risk neutral) measures. The proof of this gives us a first taste of the importance of convex analysis tools. We then discuss how to use utility functions and risk measures to characterize the preference of market agents. The second part of the talk focuses on the issue of pricing financial derivatives. We use simple models to illustrate the idea of the prevailing Black-Scholes replicating portfolio pricing method and the related Cox-Ross risk-neutral pricing method for financial derivatives. Then, we show that the replicating portfolio pricing method is a special case of portfolio optimization and the risk neutral measure is a natural by-product of solving the dual problem. Taking the convex analysis perspective of these methods h
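The agreement between the replicating-portfolio price and the risk-neutral price described above can be checked in the simplest one-period binomial model (all numbers below are an illustrative toy example, not from the talk):

```python
# One-period binomial model: stock at S0 moves up to Su or down to Sd,
# interest rate r. Price a European call with strike K two ways.
S0, Su, Sd, r, K = 100.0, 120.0, 80.0, 0.0, 100.0
Cu, Cd = max(Su - K, 0.0), max(Sd - K, 0.0)   # payoffs in each state

# (1) Replicating portfolio: Delta shares plus B in the bank account.
Delta = (Cu - Cd) / (Su - Sd)
B = (Cd - Delta * Sd) / (1 + r)
price_repl = Delta * S0 + B

# (2) Risk-neutral probability q, the by-product of the dual problem.
q = ((1 + r) * S0 - Sd) / (Su - Sd)
price_rn = (q * Cu + (1 - q) * Cd) / (1 + r)

print(price_repl, price_rn)  # both 10.0
```

That the two numbers coincide is exactly the duality the talk emphasizes: $q$ is the dual (martingale) variable of the replication linear program.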
"In this talk I'll exhibit the interplay between Selberg integrals (interpreted broadly) and random matrix theory. Here an important role is played by the basic matrix operations of a random corank 1 projection (this reduces the number of nonzero eigenvalues by one) and bordering (this increases the number of eigenvalues by one)."
This is a discrete mathematics instructional seminar commencing 24 February--to meet on subsequent Thursdays from 3:00-4:00 p.m. The seminar will focus on "classical" papers and portions of books.
TBA
CARMA is currently engaged in several shared projects with the IRMACS Centre and the OCANA Group at UBC Okanagan, both in British Columbia, Canada. This workshop will be an opportunity to learn about the IRMACS Centre and to experience the issues in collaborating for research and teaching across the Pacific.
This will be followed by discussion and illustrations of collaboration, technology, teaching and funding etc.
Cross Pacific Collaboration pages at irmacs.
McCullough-Miller space X = X(W) is a topological model for the outer automorphism group of a free product of groups W. We will discuss the question of just how good a model it is. In particular, we consider circumstances under which Aut(X) is precisely Out(W).
The talk will explain joint work with Yehuda Shalom showing that the only homomorphisms from certain arithmetic groups to totally disconnected, locally compact groups are the obvious, or naturally occurring, ones. For these groups, this extends the superrigidity theorem that G. Margulis proved for homomorphisms from high rank arithmetic groups to Lie groups. The theorems will be illustrated by referring to the groups $SL_3(\mathbb{Z})$, $SL_2(\mathbb{Z}[\sqrt{2}])$ and $SL_3(\mathbb{Q})$.
Research, as an activity, is fundamentally collaborative in nature. Driven by the massive amounts of data that are produced by computational simulations and high resolution scientific sensors, data-driven collaboration is of particular importance in the computational sciences. In this talk, I will discuss our experiences in designing, deploying, and operating a Canada-wide advanced collaboration infrastructure in support of the computational sciences. In particular, I will focus on the importance of data in such collaborations and discuss how current collaboration tools are sorely lacking in their support of data-centric collaboration.
We describe an integrated model for TCP/IP protocols with multipath routing. The model combines a Network Utility Maximization for rate control, with a Markovian Traffic Equilibrium for routing. This yields a cross-layer design in which sources minimize the expected cost of sending flows, while routers distribute the incoming flow among the outgoing links according to a discrete choice model. We prove the existence of a unique equilibrium state, which is characterized as the solution of an unconstrained strictly convex program of low dimension. A distributed algorithm for solving this optimization problem is proposed, with a brief discussion of how it can be implemented by adapting current Internet protocols.
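The rate-control layer of such a model can be sketched in miniature: a single link shared by two log-utility sources, solved by gradient iteration on the dual (link-price) variable in the spirit of Kelly's primal-dual decomposition. The parameters below are illustrative, not from the paper:

```python
# Network utility maximization: maximize log(x1) + log(x2)
# subject to x1 + x2 <= c, solved via the dual price lam.
c, lam, step = 2.0, 0.5, 0.01
for _ in range(20000):
    x = [1.0 / lam, 1.0 / lam]     # each source maximizes log(x) - lam*x
    lam += step * (sum(x) - c)     # link raises price when oversubscribed
    lam = max(lam, 1e-9)           # keep the price positive
print(x)                           # converges to the fair split [c/2, c/2]
```

In the full model of the talk the scalar price is replaced by Markovian route costs, and the same convex-duality structure yields the unique equilibrium.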
Talk in [PDF]
We discuss a short-term revenue optimization problem that involves the optimal targeting of customers for a promotional sale in which a finite number of perishable items are offered in a last-minute sale. The goal is to select the subset of customers to whom the offer will be made available, maximizing the expected return. Each client replies with a certain probability and reports a specific value that depends on the customer type, so that the selected subset has to balance the risk of not selling all the items with the risk of assigning an item to a low-value customer. Selecting all those clients with values above a certain optimal threshold may fail to achieve the maximal revenue. However, using a linear programming relaxation, we prove that such threshold strategies attain a constant factor of the optimal value. The achieved factor is ${1\over 2}$ when a single item is to be sold, and approaches 1 as the number of available items grows to infinity. Furthermore, for the single item case, we propose an upper bound based on an exponential size linear program that allows us to get a threshold strategy achieving at least ${2\over 3}$ of the optimal revenue. Computational experiments with random instances show a significantly better performance than the theoretical predictions.
Talk in [PDF]
Various Vacation Scholars, HDR students and CARMA RAs will report on their work. This involves visualization and computation, practice and theory. Everyone is welcome to see what they have done and what they propose to do.
In this talk I will give some classical fixed point theorems and present some of their applications. The talk will be of a "Chalk and Talk" style and will include some elegant classical proofs. The down side of this is that the listener will be expected to have some familiarity with metric spaces, convexity and hopefully Zorn's Lemma.
Various Vacation Scholars, HDR students and CARMA RAs will report on their work. This involves visualization and computation, practice and theory. Everyone is welcome to see what they have done and what they propose to do.