When: 10:00 am, Fri, 9th Dec 2011
Room: V206, Mathematics Building
Access Grid Venue:
Speaker: Stephen Simons, Department of Mathematics, University of California, Santa Barbara
Title: The asymmetric sandwich theorem
Abstract: We discuss the asymmetric sandwich theorem, a generalization of the Hahn–Banach theorem. As applications, we derive various results on the existence of linear functionals in functional analysis that include bivariate, trivariate and quadrivariate generalizations of the Fenchel duality theorem. We consider both results that use a simple boundedness hypothesis (as in Rockafellar’s version of the Fenchel duality theorem) and also results that use Baire’s theorem (as in the Robinson–Attouch–Brezis version of the Fenchel duality theorem).
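For context, one standard form of the Fenchel duality theorem that the talk builds on (Rockafellar's version with a continuity hypothesis; a textbook statement, not necessarily the speaker's exact formulation) reads: if $f, g: X \to (-\infty, +\infty]$ are proper convex functions on a Banach space $X$ and $f$ is finite and continuous at some point of $\mathrm{dom}\, g$, then $$\inf_{x \in X} \{f(x) + g(x)\} = \max_{x^* \in X^*} \{-f^*(x^*) - g^*(-x^*)\},$$ where $f^*$ and $g^*$ denote the Fenchel conjugates and the maximum on the right-hand side is attained.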
When: 2:30 pm, Mon, 21st Nov 2011
Room: V206, Mathematics Building
Access Grid Venue: Andrew.Danson@newcastle.edu.au
Speaker: Markus Hegland, Mathematical Sciences Institute, Australian National University
Title: A finite element method for density estimation with Gaussian process priors
Abstract: Probability densities are a major tool in exploratory statistics and stochastic modelling. I will talk about a numerical technique for the estimation of a probability distribution from scattered data using exponential families and a maximum a posteriori approach with Gaussian process priors. Using Cameron-Martin theory, it can be seen that density estimation leads to a nonlinear variational problem with a functional defined on a reproducing kernel Hilbert space. This functional is strictly convex. A dual problem based on Fenchel duality will also be given. The (original) problem is solved using a Newton-Galerkin method with damping for global convergence. In this talk I will discuss some theoretical results relating to the numerical solution of the variational problem and the results of some computational experiments. A major challenge is of course the curse of dimensionality, which appears when high-dimensional probability distributions are estimated.
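One common formulation of such a maximum a posteriori estimate (an illustrative setup under stated assumptions; the functional treated in the talk may differ in detail) models the density as $p_u(x) = e^{u(x)} / \int_\Omega e^{u(t)}\, dt$ with $u$ in a reproducing kernel Hilbert space $H$, and minimizes the penalized negative log-posterior $$J(u) = -\frac{1}{n} \sum_{i=1}^{n} u(X_i) + \log \int_\Omega e^{u(t)}\, dt + \frac{\alpha}{2} \|u\|_H^2,$$ which is strictly convex because the log-partition term is convex and the squared RKHS norm is strictly convex; a Newton-Galerkin scheme then discretizes $u$ in a finite element space and applies damped Newton steps to $J$.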
When: 2:30 pm, Tue, 8th Nov 2011
Room: V206, Mathematics Building
Access Grid Venue: Andrew.Danson@newcastle.edu.au
Speaker: Vladimir Ejov, School of Mathematics and Statistics, University of South Australia
Title: Perturbed Determinants, Spectral Theory and Longest Cycles on Graphs
Abstract: We interpret the Hamiltonian Cycle problem (HCP) as an optimisation problem with the determinant objective function, naturally arising from the embedding of HCP into a Markov decision process. We also exhibit a characteristic structure of the class of all cubic graphs that stems from the spectral properties of their adjacency matrices, and provide an analytic explanation of this structure.
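As a small illustration of the kind of spectral data involved (a sketch only; it does not reproduce the speaker's determinant or Markov-decision-process construction, and the two example graphs are chosen here purely for illustration), the following Python snippet lists the adjacency spectra of two small cubic graphs:

    # Illustrative only: adjacency spectra of two small cubic graphs.
    import numpy as np

    K4 = np.ones((4, 4)) - np.eye(4)      # complete graph on 4 vertices (3-regular)
    prism = np.array([                    # triangular prism (3-regular, 6 vertices)
        [0, 1, 1, 1, 0, 0],
        [1, 0, 1, 0, 1, 0],
        [1, 1, 0, 0, 0, 1],
        [1, 0, 0, 0, 1, 1],
        [0, 1, 0, 1, 0, 1],
        [0, 0, 1, 1, 1, 0]])

    for name, A in [("K4", K4), ("3-prism", prism)]:
        eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
        # The largest eigenvalue of any connected cubic graph is 3.
        print(name, np.round(eigs, 3))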
When: 4:00 pm, Thu, 1st Sep 2011
Room: V206, Mathematics Building
Access Grid Venue: Andrew.Danson@newcastle.edu.au
Speaker: Dr Francisco Aragón Artacho, CARMA, The University of Newcastle
Title: Lipschitzian properties of a generalized proximal point algorithm
Abstract: Basically, a function is Lipschitz continuous if it has a bounded slope. This notion can be extended to set-valued maps in different ways. We will mainly focus on one of them: the so-called Aubin (or Lipschitz-like) property. We will employ this property to analyze the iterates generated by an iterative method known as the proximal point algorithm. Specifically, we consider a generalized version of this algorithm for solving a perturbed inclusion $$y \in T(x),$$ where $y$ is a perturbation element near 0 and $T$ is a set-valued mapping. We will analyze the behavior of the convergent iterates generated by the algorithm and we will show that they inherit the regularity properties of $T$, and vice versa. We analyze the cases when the mapping $T$ is metrically regular (the inverse map has the Aubin property) and strongly regular (the inverse is locally a Lipschitz function). We will not assume any type of monotonicity.
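To fix ideas, here is a minimal sketch of the classical (unperturbed) proximal point iteration $x_{k+1} = (I + \lambda T)^{-1}(x_k)$ for the simple choice $T = \partial |\cdot|$ on the real line, whose resolvent is soft-thresholding; the generalized, perturbed algorithm analyzed in the talk is not reproduced here, and the starting point and step size below are arbitrary:

    # Minimal sketch: classical proximal point iteration for T = subdifferential of |x|.
    # The resolvent (I + lam*T)^{-1} is the soft-thresholding map; the iterates
    # converge to 0, the unique solution of 0 in T(x).
    def soft_threshold(x, lam):
        if x > lam:
            return x - lam
        if x < -lam:
            return x + lam
        return 0.0

    x, lam = 5.0, 1.0                 # arbitrary starting point and step size
    for k in range(8):
        x = soft_threshold(x, lam)
        print(k, x)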
When: 3:00 pm, Tue, 16th Aug 2011
Room: V206, Mathematics Building
Access Grid Venue:
Speaker: Liangjin Yao, CARMA, The University of Newcastle
Title: For maximally monotone linear relations, dense type, negative-infimum type, and Fitzpatrick-Phelps type all coincide with monotonicity of the adjoint
Abstract: It is shown that, for maximally monotone linear relations defined on a general Banach space, the monotonicities of dense type, of negative-infimum type, and of Fitzpatrick-Phelps type are the same and equivalent to monotonicity of the adjoint. This result also provides affirmative answers to two problems: one posed by Phelps and Simons, and the other by Simons.
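For readers unfamiliar with the terminology, the basic notions are as follows (the finer classifications of dense type, negative-infimum type and Fitzpatrick-Phelps type are not reproduced here): a linear relation $A: X \rightrightarrows X^*$ (one whose graph is a linear subspace of $X \times X^*$) is monotone if $$\langle x, x^* \rangle \ge 0 \quad \text{for all } (x, x^*) \in \operatorname{gra} A,$$ and maximally monotone if its graph cannot be properly enlarged while preserving monotonicity.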
When: 4:00 pm, Thu, 11th Aug 2011
Room: V205, Mathematics Building
Access Grid Venue: Andrew.Danson@newcastle.edu.au
Speaker: Liangjin Yao, CARMA, The University of Newcastle
Title: The sum of a maximally monotone linear relation and the subdifferential of a proper lower semicontinuous convex function is maximally monotone
Abstract: The most important open problem in Monotone Operator Theory concerns the maximal monotonicity of the sum of two maximally monotone operators provided that Rockafellar's constraint qualification holds. In this talk, we prove the maximal monotonicity of the sum of a maximally monotone linear relation and the subdifferential of a proper lower semicontinuous convex function satisfying Rockafellar's constraint qualification. Moreover, we show that this sum operator is of type (FPV).
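For context, the sum problem referred to here asks whether, for maximally monotone operators $A, B: X \rightrightarrows X^*$ on a Banach space satisfying Rockafellar's constraint qualification $$\operatorname{dom} A \cap \operatorname{int} \operatorname{dom} B \neq \varnothing,$$ the sum $A + B$ is again maximally monotone. Rockafellar proved this in reflexive spaces; the general nonreflexive case remains open, and the talk settles the particular case described in the title.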
When: 11:00 am, Fri, 4th Mar 2011
Room: V206, Mathematics Building
Access Grid Venue: Andrew.Danson@newcastle.edu.au
Speaker: Qiji Jim Zhu, Department of Mathematics, Western Michigan University
Title: Why Bankers Should Learn Convex Analysis (Part 2)
Abstract:

Concave utility functions and convex risk measures play crucial roles in economic and financial problems. The use of concave utility functions can be traced back at least to Bernoulli, who posed and solved the St. Petersburg wager problem. They were the prevailing way to characterize rational market participants until the 1970s, when Black and Scholes introduced the replicating portfolio pricing method and Cox and Ross developed the risk-neutral measure pricing formula. For the past several decades this `new paradigm’ has been the mainstream. We will show that, in fact, the `new paradigm’ is a special case of traditional utility maximization and its dual problem. Moreover, the convex analysis perspective also highlights that overlooking sensitivity analysis in the `new paradigm’ is one of the main reasons that led to the recent financial crisis. It is perhaps time again for bankers to learn convex analysis.

The talk will be divided into two parts. In the first part we lay out a discrete model for financial markets. We explain the concept of arbitrage and the no-arbitrage principle. This is followed by the important fundamental theorem of asset pricing, in which the no-arbitrage condition is characterized by the existence of martingale (risk-neutral) measures. The proof of this gives us a first taste of the importance of convex analysis tools. We then discuss how to use utility functions and risk measures to characterize the preferences of market agents. The second part of the talk focuses on the issue of pricing financial derivatives. We use simple models to illustrate the idea of the prevailing Black-Scholes replicating portfolio pricing method and the related Cox-Ross risk-neutral pricing method for financial derivatives. Then we show that the replicating portfolio pricing method is a special case of portfolio optimization and that the risk-neutral measure is a natural by-product of solving the dual problem. Taking the convex analysis perspective of these methods h
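
As a toy illustration of the two pricing methods mentioned above (a sketch only; the numbers and variable names below are illustrative and not taken from the talk), a one-period binomial model shows that the replicating-portfolio price and the risk-neutral-measure price of a call option coincide:

    # One-period binomial model: replicating portfolio vs. risk-neutral pricing.
    # All parameters are illustrative.
    S0, u, d, r, K = 100.0, 1.2, 0.8, 0.05, 100.0   # spot, up/down factors, rate, strike

    Su, Sd = S0 * u, S0 * d
    Cu, Cd = max(Su - K, 0.0), max(Sd - K, 0.0)     # call payoff in each state

    # Replicating portfolio: delta shares plus b in the bank match the payoff in both states.
    delta = (Cu - Cd) / (Su - Sd)
    b = (Cu - delta * Su) / (1 + r)
    price_replication = delta * S0 + b

    # Risk-neutral measure: q is chosen so that the discounted stock price is a martingale.
    q = ((1 + r) - d) / (u - d)
    price_risk_neutral = (q * Cu + (1 - q) * Cd) / (1 + r)

    print(price_replication, price_risk_neutral)    # the two prices agree (no arbitrage)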

When: 11:00 am, Thu, 3rd Mar 2011
Room: V206, Mathematics Building
Access Grid Venue: Andrew.Danson@newcastle.edu.au
Speaker: Qiji Jim Zhu, Department of Mathematics, Western Michigan University
Title: Why Bankers Should Learn Convex Analysis (Part 1)
Abstract:

Concave utility functions and convex risk measures play crucial roles in economic and financial problems. The use of concave utility functions can be traced back at least to Bernoulli, who posed and solved the St. Petersburg wager problem. They were the prevailing way to characterize rational market participants until the 1970s, when Black and Scholes introduced the replicating portfolio pricing method and Cox and Ross developed the risk-neutral measure pricing formula. For the past several decades this `new paradigm’ has been the mainstream. We will show that, in fact, the `new paradigm’ is a special case of traditional utility maximization and its dual problem. Moreover, the convex analysis perspective also highlights that overlooking sensitivity analysis in the `new paradigm’ is one of the main reasons that led to the recent financial crisis. It is perhaps time again for bankers to learn convex analysis.

The talk will be divided into two parts. In the first part we lay out a discrete model for financial markets. We explain the concept of arbitrage and the no-arbitrage principle. This is followed by the important fundamental theorem of asset pricing, in which the no-arbitrage condition is characterized by the existence of martingale (risk-neutral) measures. The proof of this gives us a first taste of the importance of convex analysis tools. We then discuss how to use utility functions and risk measures to characterize the preferences of market agents. The second part of the talk focuses on the issue of pricing financial derivatives. We use simple models to illustrate the idea of the prevailing Black-Scholes replicating portfolio pricing method and the related Cox-Ross risk-neutral pricing method for financial derivatives. Then we show that the replicating portfolio pricing method is a special case of portfolio optimization and that the risk-neutral measure is a natural by-product of solving the dual problem. Taking the convex analysis perspective of these methods h
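
For context, the fundamental theorem of asset pricing invoked in the first part states, in the discrete (finite-state, finite-horizon) setting, that the market admits no arbitrage if and only if there exists a probability measure $Q$, equivalent to the physical measure $P$, under which the discounted price processes are martingales; the existence of such a risk-neutral measure is typically established by a separation (Hahn-Banach) argument, which is where convex analysis enters.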