# Industrial Mathematics: Case Studies in the Diffusion of Heat and Matter

This is not surprising: we are, after all, mapping a function to an element of a much smoother space; in fact the Wright function also decays exponentially to zero for large negative arguments, cf lemma 2. However, this analysis does not reveal a difference in the degree of ill-posedness: even though both the classical and the fractional problems are severely ill-posed, their practical computational behavior can still be quite different, as we shall see below. This problem is also known as the lateral Cauchy problem in the literature.

In the case α = 1, it is known that the inverse problem is severely ill-posed [8, 33]. To gain insight into the fractional case, we apply the Laplace transform in time. Using the known expression for the Laplace transform of the Caputo derivative [53], one obtains an explicit formula for the transformed solution. Upon deforming the contour suitably, this formula allows the development of an efficient numerical scheme for the sideways problem via quadrature rules [ ], provided that the lateral Cauchy data are available for all times.
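The contour-deformation scheme referenced above is beyond the scope of this sketch, but the basic idea of recovering a time-domain function from samples of its Laplace transform can be illustrated with the classical Gaver–Stehfest method (a different, real-axis inversion rule, used here purely as an illustration; it is not the quadrature scheme of the cited work). We test it on F(s) = 1/(s+1), whose inverse transform is e^{-t}:

```python
import math

def stehfest_weights(N):
    # Gaver-Stehfest coefficients V_i (N must be even)
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k) /
                  (math.factorial(N // 2 - k) * math.factorial(k) *
                   math.factorial(k - 1) * math.factorial(i - k) *
                   math.factorial(2 * k - i)))
        V.append((-1) ** (N // 2 + i) * s)
    return V

def invert_laplace(F, t, N=12):
    # f(t) ~ (ln 2 / t) * sum_i V_i * F(i ln 2 / t)
    V = stehfest_weights(N)
    ln2 = math.log(2.0)
    return ln2 / t * sum(V[i - 1] * F(i * ln2 / t) for i in range(1, N + 1))

F = lambda s: 1.0 / (s + 1.0)   # Laplace transform of exp(-t)
approx = invert_laplace(F, 1.0)
print(approx, math.exp(-1.0))
```

Like all numerical Laplace inversion rules, this is itself an ill-conditioned computation: the weights V_i grow rapidly with N, which mirrors the ill-posedness discussed in the text.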

In other words, both the classical and fractional sideways problems are severely ill-posed in the sense of error estimates between the norms of the data and of the unknown; but within a fixed frequency range, the time fractional sideways problem can behave in a far less ill-posed manner. Hence the anomalous diffusion mechanism does help substantially: much more effective reconstructions are possible in the fractional case.

Next we illustrate the point numerically. The numerical results for the sideways problem are given in figure 5. Similar transitions are observed for other terminal times. This might be related to the discrete setting, for which there is an inherent frequency cutoff. To give a more complete picture, we examine the singular value spectrum in figure 5(b).

Unlike for the backward diffusion problem discussed earlier, the singular values actually decay only algebraically, even for α = 1, apart from a few tiny singular values that account for the large condition number. Hence, in the discrete setting, even for α = 1, the problem is still nearly well-posed despite the large apparent condition number, since a few tiny singular values separated from the rest of the spectrum by a distinct gap are harmless in most regularization techniques.

Figure 5. Numerical results for the sideways problem.

Pictorially, the forward map F is supported only in the upper left corner and has a triangular structure, which reflects the causal, or Volterra, nature of the sideways problem for the fractional diffusion equation. This causal structure should be utilized in developing reconstruction techniques. Hence, one feasible approach is to recover the boundary condition only over a smaller subinterval of the measurement time interval.

This idea underlies one popular engineering approach, the sequential function specification method [4, 64]. Figure 6 shows the Jacobian map F for three values of the fractional order, including the classical case α = 1. The sideways problem for classical diffusion has been extensively studied, and many efficient numerical methods have been developed and analyzed [8, 11, 24, 25]. In the fractional case, however, there are only a few works on numerical schemes, mostly for one-dimensional problems, and there seems to be no theoretical study of stability.
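The causal structure can be made concrete with a minimal sketch. Here the discretized forward map is a lower triangular Toeplitz matrix built from a toy kernel k(t) = e^{-t} (an assumed stand-in for the true flux kernel), so with exact data the boundary function can be recovered step by step by forward substitution, which is exactly the structure that sequential methods exploit:

```python
import math

def kernel(t):
    # toy causal kernel; a stand-in for the actual flux kernel (assumption)
    return math.exp(-t)

n, T = 50, 1.0
h = T / n
t = [(j + 1) * h for j in range(n)]

# lower triangular (causal) forward map: g_i = sum_{j<=i} h*k((i-j)h + h/2) p_j
A = [[h * kernel((i - j) * h + h / 2) if j <= i else 0.0 for j in range(n)]
     for i in range(n)]

p_true = [math.sin(2 * math.pi * s) for s in t]
g = [sum(A[i][j] * p_true[j] for j in range(i + 1)) for i in range(n)]

# sequential recovery by forward substitution (stable only with exact data;
# noisy data would be amplified at each step)
p = []
for i in range(n):
    r = g[i] - sum(A[i][j] * p[j] for j in range(i))
    p.append(r / A[i][i])

err = max(abs(a - b) for a, b in zip(p, p_true))
print("max recovery error:", err)
```

With noisy data the division by the small diagonal entries amplifies errors, which is why sequential schemes stabilize each step using a few "future" time levels.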

Murio [76, 77] developed several numerical schemes for the problem. Qian [85] discussed the ill-posedness of the quarter plane formulation of the sideways problem using Fourier analysis, based on which a mollifier method was proposed, with error estimates provided. In [87], the recovery of a nonlinear boundary condition from the lateral Cauchy data was studied using an integral equation approach, and a convergent fixed point iteration method was suggested.

Zheng and Wei [ ] proposed a mollification method for the quarter plane formulation of the sideways problem, convoluting the fractional derivative with a smooth kernel, and derived error estimates for the approximation under a priori bounds on the solution. The Cauchy problem for the time fractional diffusion equation has been studied numerically in [ ]. In particular, via separation of variables, a Volterra integral equation reformulation of the problem was derived, from which the ill-posedness of the Cauchy problem follows directly.

All these works are concerned with the one-dimensional case; the high dimensional case has not been studied. A third classical linear inverse problem for the diffusion equation is the inverse source problem, i.e., the recovery of the source term f from additional data. Clearly, one piece of boundary data or final time data alone is insufficient to determine a general source term uniquely, due to the dimensional disparity. To restore uniqueness, as usual, we look for only a space- or time-dependent component of the source term f.

With different combinations of the data and the source term, we obtain several different, and not equivalent, formulations of the inverse source problem. Below we examine several of them briefly. Like before, we resort to separation of variables. For a space dependent only source term f(x) (with zero initial data), the solution u to the forward problem is given by

$$u(x,t) = \sum_{n=1}^{\infty} \lambda_n^{-1}\bigl(1 - E_{\alpha,1}(-\lambda_n t^{\alpha})\bigr)(f,\varphi_n)\,\varphi_n(x),$$

where $(\lambda_n, \varphi_n)$ denote the Dirichlet eigenpairs of the negative Laplacian on the domain.

Hence the measured final time data $g = u(\cdot,T)$ is given by

$$g = \sum_{n=1}^{\infty} \lambda_n^{-1}\bigl(1 - E_{\alpha,1}(-\lambda_n T^{\alpha})\bigr)(f,\varphi_n)\,\varphi_n.$$

Taking the inner product with $\varphi_n$ on both sides, we arrive at the following representation of the source term f in terms of the measured data g:

$$(f,\varphi_n) = \frac{\lambda_n (g,\varphi_n)}{1 - E_{\alpha,1}(-\lambda_n T^{\alpha})}.$$

By the complete monotonicity of the Mittag-Leffler function $E_{\alpha,1}(-t)$ on the positive real axis [82], the denominator is positive and bounded away from zero. Each frequency component $(f,\varphi_n)$ thus differs from $(g,\varphi_n)$ essentially by a factor $\lambda_n$, which amounts to a loss of two derivatives in space. This behavior is identical with that for the backward fractional diffusion problem, and the statement also holds for the inverse source problem in the classical diffusion case.
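The factor relating the modes of the data to those of f can be checked directly. For the classical case α = 1 on the unit interval with Dirichlet conditions (an assumption for this illustration; the fractional case would replace the exponential by a Mittag-Leffler function), the factor is λ_n/(1 − e^{−λ_n T}) with λ_n = (nπ)², and a quick computation confirms that it grows only like λ_n, i.e., algebraically, which is the two derivative loss:

```python
import math

def amplification(n, T=1.0):
    # factor mapping the n-th Fourier mode of the data g to that of f
    # (alpha = 1; Dirichlet eigenvalues lam_n = (n*pi)^2 on the unit interval)
    lam = (n * math.pi) ** 2
    return lam / (1.0 - math.exp(-lam * T))

factors = [amplification(n) for n in range(1, 11)]
for n, a in zip(range(1, 11), factors):
    # the factor quickly becomes indistinguishable from lam_n itself
    print(n, a, (n * math.pi) ** 2)
```

A mildly ill-posed problem of this type can be regularized by standard means (truncation or Tikhonov) with only modest loss of resolution.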

This is not surprising: with a space dependent source term f, the solution u to the forward problem can be split into a steady state component u_s and a decaying component u_d, i.e., u = u_s + u_d. By the decay of u_d, the steady state component u_s dominates, and recovering f from u_s amounts to a loss of two spatial derivatives.

This is fully confirmed by the numerical experiments, cf figure 7. In particular, for a large terminal time T, the singular value spectra are almost identical for all fractional orders, decaying to zero at an algebraic rate, cf figure 7(b).

Figure 7. Numerical results for the inverse source problem with final time data and a space dependent source term.

Next we turn to the time dependent case, i.e., recovering a time dependent component p(t) of the source term.

Mathematically, this inverse problem has not been completely analyzed even for the classical diffusion equation. The inclusion of a nontrivial weight q(x) is important, since without it there is nonuniqueness. To see this, take u to satisfy the equation with suitable initial data and a homogeneous Neumann boundary condition; then two different pairs of a time dependent source component and a solution can produce identical measured data. Like previously, the solution u to the forward problem admits a separation of variables representation.

Taking the inner product with an eigenfunction on both sides, we deduce a Volterra-type relation between the unknown p(t) and the data. In the classical case α = 1, the formula recovers the familiar relation with an exponential kernel.

Intuitively, the kernel can only pick up information for t close to the terminal time T; for t away from T the information is severely damped, especially for high frequency modes, which leads to the severely ill-posed nature of the inverse problem. In the fractional case, the forward map F from the unknown to the data is clearly compact, and thus the problem is still ill-posed.

However, the fractional kernel is less smooth and decays much more slowly, and one might expect the problem to be less ill-posed than its classical diffusion counterpart. To examine this point, we present numerical results for the inverse problem in figure 8. In other words, due to the slower local decay of the exponential function near the terminal time, compared with the Mittag-Leffler function, cf figure 1(a), more frequency modes can actually be picked up by normal diffusion than by its fractional counterpart, cf figure 8(a), so a few more modes of the source term might be recovered.
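The slow tail of the fractional kernel, away from the terminal time, can be quantified. In the standard Duhamel representation the fractional kernel is τ^{α−1}E_{α,α}(−λτ^α) with lag τ = T − s, versus e^{−λτ} classically. The sketch below compares the two using the standard leading large-argument asymptotics E_{α,α}(−x) ≈ x^{−2}/|Γ(−α)| (used here as an approximation instead of evaluating the Mittag-Leffler function itself, which is numerically delicate in this regime):

```python
import math

alpha, lam = 0.5, 50.0

def exp_kernel(tau):
    # classical diffusion kernel e^{-lam*tau}
    return math.exp(-lam * tau)

def ml_kernel(tau):
    # fractional kernel tau^(alpha-1) * E_{alpha,alpha}(-lam*tau^alpha),
    # approximated by the leading asymptotics
    # E_{alpha,alpha}(-x) ~ x^(-2)/|Gamma(-alpha)|, valid for lam*tau^alpha >> 1
    x = lam * tau ** alpha
    return tau ** (alpha - 1) * x ** (-2) / abs(math.gamma(-alpha))

tau = 1.0  # time lag T - s
print("exponential kernel:", exp_kernel(tau))
print("fractional kernel :", ml_kernel(tau))
```

At lag τ = 1 and frequency λ = 50 the exponential kernel has essentially vanished while the algebraic tail of the fractional kernel still retains information, which is the mechanism behind the milder ill-posedness away from T.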

This indicates that, with sufficiently accurate data at a small time instant, normal diffusion may allow recovering more modes, i.e., a better resolution.

Figure 8. The singular value spectra at two different terminal times T for the inverse source problem with final time data and an unknown time dependent component p(t).

In practice, the accessible data can also be the flux at one end point of the interval. By repeating the preceding argument, the data is related to the unknown p(t) by a Volterra integral equation. A stability estimate in this direction was established in [88]. Along the same line of thought, under reasonable assumptions, one can deduce a similar estimate in the fractional case. In other words, anomalous diffusion can mitigate the degree of ill-posedness of the inverse problem.

Further, the terminal time T does not affect the condition number to a large extent, cf figure 9. Our discussion of the inverse source problems indicates that this observation remains valid in the time fractional diffusion case. In particular, although not presented, we note that the inverse source problem of recovering a space dependent component from lateral Cauchy data is severely ill-posed for both fractional and normal diffusion. In the simplest case of a space dependent only source term, it is mathematically equivalent to unique continuation, a well known example of a severely ill-posed inverse problem.

The inverse source problems for the classical diffusion equation have been extensively studied; see e.g. [ ]. Inverse source problems for FDEs have also been studied numerically. Zhang and Xu [ ] established the unique recovery of a space dependent source term in the one-dimensional model. This is achieved by an eigenfunction expansion and the Laplace transform, and the uniqueness follows from a unique continuation principle for analytic functions.

Sakamoto and Yamamoto [89] discussed the inverse problem of determining a spatially varying component of the source term from final overdetermined data in multi-dimensional space, and established its well-posedness in the Hadamard sense, except for a discrete set of values of the diffusion constant, using analytic Fredholm theory. Very recently, Luchko et al [68] showed the uniqueness of recovering a nonlinear source term from boundary measurements, and developed a numerical scheme of fixed point iteration type.

Aleroev et al [2] showed the uniqueness of recovering a space dependent source term from integral type observational data. Recently there have been many numerical studies of this class of inverse problems. In [ ], the numerical recovery of a spatially varying component of the source term from final time data in a general domain was studied using a quasi-boundary value method; see also [98, ] for related studies.

Wang et al [ ] proposed to determine the space dependent source term from final time data in multiple dimensions using a reproducing kernel Hilbert space method. Now we consider a nonlinear inverse coefficient problem for the time fractional diffusion equation: given the final time data g = u(·, T), find the potential q in the model. In [38], an elegant fixed point method was developed for the classical case, and the monotone convergence of the method was established. It can be adapted straightforwardly to the fractional case: given an initial guess q_0, compute the updates q_k recursively by a fixed point iteration.

Since the strong maximum principle remains valid for the time fractional diffusion equation [ ], the scheme is monotonically convergent under suitable conditions. As the terminal time T → ∞, the problem reduces to a steady state problem, and the scheme amounts to twice numerical differentiation in space and converges within one iteration, provided that the data g is sufficiently accurate.
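The steady state limit can be made explicit. Assuming the model takes the form −u″ + q u = f at steady state (an assumption about the elided model), the potential is recovered as q = (f + u_s″)/u_s, i.e., by twice numerical differentiation of the data; the sketch below uses a manufactured solution, where with exact data the error is only the O(h²) discretization error:

```python
import math

# assumed steady-state model: -u'' + q*u = f on (0, 1)
# manufactured solution: pick u and q, compute the corresponding f
u_exact = lambda x: 1.0 + math.sin(math.pi * x)
upp_exact = lambda x: -math.pi ** 2 * math.sin(math.pi * x)
q_exact = lambda x: 1.0 + x
f_exact = lambda x: -upp_exact(x) + q_exact(x) * u_exact(x)

n = 100
h = 1.0 / n
x = [i * h for i in range(n + 1)]
u = [u_exact(s) for s in x]

# recover q by twice numerical differentiation: q = (f + u'') / u
errs = []
for i in range(1, n):
    upp = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / h ** 2   # central difference
    q_rec = (f_exact(x[i]) + upp) / u[i]
    errs.append(abs(q_rec - q_exact(x[i])))

print("max error in recovered q:", max(errs))
```

With noise of size δ in u, the second difference amplifies it to O(δ/h²), which is why the accuracy of the data g is crucial.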

Hence, it is natural to expect that the convergence of the scheme depends crucially on the terminal time T: the larger the time T, the closer the solution u is to the steady state solution, and thus the faster the convergence of the fixed point scheme, in view of the solution decay behavior in lemma 2. To illustrate the point, we present in figure 10 some numerical results of reconstructing a discontinuous potential given by the characteristic function of a set S. In order to illustrate the convergence behavior of the fixed point scheme, we take exact data. In the figure, e denotes the relative error. Numerically, one also observes the monotone convergence of the scheme.

Figure 10. Numerical results for the fixed point reconstruction of the potential.

Generally, the recovery of a coefficient in FDEs has not been extensively studied. Cheng et al [14] established the unique recovery of the fractional order and the diffusion coefficient from lateral boundary measurements. It represents one of the first mathematical works on inverse problems for FDEs, and has inspired many follow-up works. Yamamoto and Zhang [ ] established conditional stability in determining a zeroth-order coefficient in a one-dimensional FDE with a half order Caputo derivative by means of a Carleman estimate.

Carleman estimates for time fractional diffusion were discussed in [13, 62, ]. Wang and Wu [97] studied the simultaneous recovery of two time varying coefficients. All these works are concerned with theoretical analysis, and there are even fewer works on the numerical analysis of related inverse problems.

Li et al [58] suggested an optimal perturbation algorithm for the simultaneous numerical recovery of the diffusion coefficient and the fractional order in a one-dimensional time fractional FDE. In [50], the authors considered the identification of a potential term from the lateral flux data at one fixed time instance, corresponding to a complete set of source terms, and established unique determination for 'small' potentials. Further, a Newton type method was proposed in [50], and its convergence was shown.

Even though our discussions have focused on time fractional diffusion involving a single fractional derivative in time, it is also possible to consider equations whose time derivative involves multiple fractional orders, i.e., multi-term models. One of the very first undetermined coefficient problems for PDEs was discussed in the paper by Jones [52]; see also [8, chapter 13].

This is to determine the coefficient a(t) from over-posed boundary data. In [52], Jones provided a complete analysis of the problem, giving necessary and sufficient conditions for a unique solution as well as determining the exact level of ill-conditioning. The key step in the analysis is a change of variables and conversion of the problem to an equivalent integral equation formulation. Perhaps surprisingly, this approach involves the use of a fractional derivative, as we now show. A central role is played by an auxiliary function h(t) defined in terms of the data. It was shown that any solution of the inverse problem must also solve a fixed point equation for an integral operator on a suitable function space.

The main result in [52] is that this operator has a unique fixed point and is monotone in the sense of preserving the natural partial order on that space. Given these developments, it might seem that a parallel construction for the time fractional diffusion counterpart would be relatively straightforward, but this seems not to be the case. The basic steps in the parabolic version require items that are simply not true in the fractional case, such as the product rule, and without these the above structure cannot be replicated, or at least not without some further ingenuity.
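The flavor of such a monotone Volterra fixed point can be conveyed on a toy equation (a generic stand-in, not Jones' actual operator): x(t) = 1 + ∫₀ᵗ x(s) ds, whose solution is eᵗ. The iterates of the discretized operator increase pointwise toward the fixed point, exactly the order-preservation property used in the uniqueness argument:

```python
import math

n, T = 200, 1.0
h = T / n
t = [i * h for i in range(n + 1)]

def apply_T(x):
    # (Tx)(t) = 1 + int_0^t x(s) ds, discretized by the trapezoidal rule
    out = [1.0]
    acc = 0.0
    for i in range(1, n + 1):
        acc += 0.5 * h * (x[i - 1] + x[i])
        out.append(1.0 + acc)
    return out

x = [0.0] * (n + 1)          # start below the solution
for _ in range(60):
    x_new = apply_T(x)
    # monotonicity: the iterates increase pointwise toward the fixed point
    assert all(a >= b - 1e-12 for a, b in zip(x_new, x))
    x = x_new

err = max(abs(a - math.exp(s)) for a, s in zip(x, t))
print("max error vs exp(t):", err)
```

The positivity of the kernel is what makes the operator order preserving; it is precisely this kind of structural property that fails to carry over directly to the fractional setting.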

Now we turn to differential equations involving a fractional derivative in space. There are several possible choices of the fractional derivative in space, e.g., the Riemann–Liouville derivative, the Djrbashian–Caputo derivative, and the fractional Laplacian. In recent years the use of the fractional Laplacian has become especially popular in high dimensional spaces, and it admits a well developed analytical theory.

We shall focus on the left-sided Djrbashian–Caputo fractional derivative in the one-dimensional case, and consider the following four inverse problems: the inverse Sturm–Liouville problem, the Cauchy problem for a fractional elliptic equation, backward diffusion, and the sideways problem. First we consider the following Sturm–Liouville problem on the unit interval: find eigenvalues and nonzero eigenfunctions of the fractional differential operator with potential q under zero Dirichlet boundary conditions. A Sturm–Liouville problem of this form was considered by Djrbashian [19, 22] to construct certain biorthogonal bases for spaces of analytic functions; see also [78].

Like before, with α = 2 it recovers the classical Sturm–Liouville problem. For a general potential q, little is known in the fractional case about the analytical properties of the eigenvalues and eigenfunctions. In the case q = 0, the eigenfunctions can be written explicitly in terms of a Mittag-Leffler function, and the eigenvalue distribution can then be analyzed using the exponential asymptotics of the Mittag-Leffler function in lemma 2.


Hence, for any α ∈ (1, 2), there are only a finite number of real eigenvalues. It is well known that eigenvalues contain valuable information about the boundary value problem. For example, it is known classically that the sequence of Dirichlet eigenvalues can uniquely determine a potential q symmetric with respect to the midpoint of the interval, and, together with additional spectral information, one can uniquely determine a general potential q; see [12, 86] for an overview of results on the classical inverse Sturm–Liouville problem. In the fractional case the eigenvalues are generally genuinely complex, and a complex number may carry more information than a real one.
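The finiteness of the real spectrum can be checked numerically. For q = 0 the eigenfunctions are of the form x E_{α,2}(−λx^α), so the Dirichlet eigenvalues are the zeros of E_{α,2}(−λ) (this characterization is standard for the Djrbashian–Caputo case and is the assumption behind the sketch). The code evaluates E_{α,2}(−λ) by its power series, with log-scaled terms to avoid overflow, and counts sign changes:

```python
import math

def ml_neg(alpha, beta, x, K=300):
    # E_{alpha,beta}(-x) for x > 0, power series with log-scaled terms;
    # adequate for the moderate arguments used below
    s = 0.0
    for k in range(K):
        term = math.exp(k * math.log(x) - math.lgamma(alpha * k + beta))
        s += term if k % 2 == 0 else -term
    return s

def count_real_eigenvalues(alpha, lam_max=100.0, step=0.5):
    # each sign change of E_{alpha,2}(-lam) marks a real Dirichlet eigenvalue
    lams = [step * i for i in range(1, int(lam_max / step) + 1)]
    vals = [ml_neg(alpha, 2.0, lam) for lam in lams]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

# alpha = 2: E_{2,2}(-lam) = sin(sqrt(lam))/sqrt(lam), zeros at (n*pi)^2,
# so the count on (0, 100] grows without bound as lam_max increases
print("alpha = 2.0:", count_real_eigenvalues(2.0))
# a fractional order: only finitely many real eigenvalues in total
print("alpha = 1.5:", count_real_eigenvalues(1.5))
```

For α = 2 the real zeros accumulate at the classical rate, whereas for fractional α the count stops growing: the remaining eigenvalues move off the real axis into the complex plane.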

Thus one naturally wonders whether these complex eigenvalues do contain more information about the potential. Numerically, the answer is affirmative. To illustrate this, we show some numerical reconstructions in figure 11, obtained using a frozen Newton method and representing the sought-for potential q in a Fourier series [50]. The Dirichlet eigenvalues can be computed efficiently using a Galerkin finite element method [45].

One observes that a single Dirichlet spectrum can uniquely determine a general potential q. Theoretically, this surprising uniqueness in the fractional case remains to be established.

Figure 11. Numerical results for the inverse Sturm–Liouville problem with a Djrbashian–Caputo derivative. The reconstructions are computed from the first eight eigenvalues in absolute value using a frozen Newton method [50].

In the Riemann–Liouville case, likewise, little is known about the analytical properties of the eigenvalues and eigenfunctions, although the asymptotics of the eigenvalues remain valid. The numerical results from the Dirichlet spectrum in the Riemann–Liouville case are shown in figure 12. For a general potential q, the reconstruction captures only the symmetric part, which is drastically different from the Djrbashian–Caputo case, but identical with the situation for the classical Sturm–Liouville problem.

Further, if we assume that the potential q is known on the left half of the interval, then the Dirichlet spectrum allows uniquely reconstructing q on the remaining half, cf figure 12(b). These results indicate that in the Riemann–Liouville case the complex spectrum is not more informative than in the classical Sturm–Liouville problem. The precise mechanism underlying the fundamental differences between the Djrbashian–Caputo and Riemann–Liouville cases awaits further study.

Figure 12. Numerical results for the inverse Sturm–Liouville problem with a Riemann–Liouville fractional derivative.

In general, the Sturm–Liouville problem with a fractional derivative remains largely elusive analytically, and numerical methods such as the finite element method [46] provide a valuable, and often the only, tool for studying its properties. For a variant of the fractional Sturm–Liouville problem, which contains a fractional derivative in the lower order term, Malamud [71] established the existence of a similarity transformation, analogous to the well known Gel'fand–Levitan–Marchenko transformation, and also the unique recovery of the potential from multiple spectra.

In the classical case, the Gel'fand–Levitan–Marchenko transformation lends itself to a constructive algorithm [86]; however, it is unclear whether this remains true in the fractional case. In [50], the authors proposed a Newton type method for reconstructing the potential, which numerically exhibits very good convergence behavior; a rigorous convergence analysis of the scheme is still missing. Further, the uniqueness and nonuniqueness questions for related inverse Sturm–Liouville problems are outstanding. Last, as noted above, there are other possible choices of the space fractional derivative, e.g., the Riemann–Liouville derivative and the fractional Laplacian.

It is unknown whether the preceding observations are valid for these alternative derivatives. One classical elliptic inverse problem is the Cauchy problem for the Laplace equation, which plays a fundamental role in the study of many elliptic inverse problems [40]. A first example was given by Jacques Hadamard [31] to illustrate the severe ill-posedness of the Cauchy problem, which motivated him to introduce the concept of well-posedness and ill-posedness for problems in mathematical physics.

So a natural question is whether the Cauchy problem for the fractional elliptic equation is equally ill-posed. To illustrate this, we consider the following fractional elliptic problem on a rectangular domain; with α = 2, it recovers the Cauchy problem for the Laplace equation. Applying separation of variables, each mode of the solution satisfies a fractional ordinary differential equation. Let (λ, φ) be a Dirichlet eigenpair of the Caputo derivative operator on the unit interval; with Cauchy data given by such an eigenfunction, the corresponding mode satisfies a fractional ordinary differential equation in the second variable.

Using the relation in [53, p 46], we deduce that the solution to this fractional ordinary differential equation can be expressed in terms of the Mittag-Leffler function with a positive argument. By the exponential asymptotics of the Mittag-Leffler function, cf lemma 2, the high frequency modes are amplified exponentially, so the Cauchy problem for the fractional elliptic equation is severely ill-posed, just like its classical counterpart.
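The exponential growth of the Mittag-Leffler function for positive arguments, which drives the ill-posedness here, is easy to verify numerically: for z > 0 every term of the power series is positive, so the evaluation is free of cancellation, and ln E_{α,1}(z) grows like z^{1/α}, mirroring the e^{√λ} growth of the harmonic modes in the classical Cauchy problem:

```python
import math

def ml_pos(alpha, z, K=400):
    # E_{alpha,1}(z) for z > 0 via its power series; all terms are positive,
    # evaluated with log-scaled magnitudes to avoid overflow
    return sum(math.exp(k * math.log(z) - math.lgamma(alpha * k + 1.0))
               for k in range(K))

alpha = 1.5
vals = {z: ml_pos(alpha, z) for z in (5.0, 10.0, 20.0)}
for z, v in sorted(vals.items()):
    # compare with the asymptotics E_{alpha,1}(z) ~ (1/alpha) exp(z^(1/alpha))
    print(z, v, math.exp(z ** (1.0 / alpha)) / alpha)
```

Each unit increase in the "frequency" variable thus multiplies the mode amplitude by an exponential factor, which is exactly the Hadamard-type instability.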

How can solar panels become cheaper? Part of the cost is in the production of silicon, which is manufactured in electrode-heated furnaces through a reaction between carbon and naturally occurring quartz rock. Making these furnaces more efficient could lead to a reduction in the financial cost of silicon and everything made from it, including computer chips, textiles, and solar panels. Greater efficiency also means reduced pollution. Since that time the field of graph theory and network science has developed greatly and the problems we want to model have also changed.

How does the skin develop follicles and eventually sprout hair? Mathematics is delving in to ever-wider aspects of the physical world. As part of our series of research articles focusing on the rigour and intricacies of mathematics and its problems, Oxford Mathematician James Sparks discusses his latest work. New methods for localising radiation treatment of tumours depend on estimating the spatial distribution of oxygen in the tissue. Oxford Mathematicians hope to improve such estimates by predicting tumour oxygen distributions and radiotherapy response using high resolution images of real blood vessel networks.

Systemic risk, loosely defined, describes the risk that large parts of the financial system will collapse, leading to potentially far-reaching consequences both within and beyond the financial system. Such risks can materialize following shocks to relatively small parts of the financial system and then spread through various contagion channels. Assessing the systemic risk a bank poses to the system has thus become a central part of regulating its capital requirements. If nations are to grow, both economically and intellectually, they must foster scientific creativity.

To do that they must create scientific environments that stimulate collaboration. This is especially true of developing countries as they seek to prosper in a global economy. Social media for health promotion is a fast-moving, complex environment, teeming with messages and interactions among a diversity of users.

Mathematics is full of challenges that remain unanswered. The field of Number Theory is home to some of the most intense and fascinating work. Two Oxford mathematicians, Ben Green and Tom Sanders , have recently made an important breakthrough in an especially tantalising problem relating to arithmetic structure within the whole numbers. Many elastic structures have two possible equilibrium states.

For example umbrellas that become inverted in a sudden gust of wind, nanoelectromechanical switches, origami patterns and even the hopper popper, which jumps after being turned inside-out. Snap-through allows plants to gradually store elastic energy, before releasing it suddenly to generate rapid motions, as in the Venus flytrap. Claim extinction too late, and you may be taking resources away from a species that actually could be saved. Plants use many strategies to disperse their seeds, but among the most fascinating are exploding seed pods. Many of us know the feeling of standing in front of a subway map in a strange city, baffled by the multi-coloured web staring back at us and seemingly unable to plot a route from point A to point B.

Now, a team of physicists and mathematicians has attempted to quantify this confusion and find out whether there is a point at which navigating a route through a complex urban transport system exceeds our cognitive limits. People make a city. Each city is as unique as the combination of its inhabitants.

Currently, cities are generally categorised by size, but research by Oxford Mathematicians Peter Grindrod and Tamsin Lee on the social networks of different cities shows that City A, which is twice the size of City B, may not necessarily be accurately represented as an amalgamation of two City Bs. Glioblastoma is an aggressive form of brain tumour, which is characterised by life expectancies of less than 2 years from diagnosis and currently has no cure. The only intervention available to a patient is having the infected area of their brain cut away as soon as the tumour cells are observed.

Unfortunately, these technologies may have undesirable consequences for the electricity networks supplying our homes and businesses. The possible plethora of low carbon technologies, like electric vehicles, heat pumps and photovoltaics, will lead to increased pressure on the local electricity networks from larger and less predictable demands.

The motion of weights attached to a chain or string moving on a frictionless pulley is a classic problem of introductory physics used to understand the relationship between force and acceleration. In their recently published paper Oxford Mathematicians Dominic Vella and Alain Goriely and colleagues looked at the dynamics of the chain when one of the weights is removed and thus one end is pulled with constant acceleration.

The use of mathematical models to describe the motion of a variety of biological organisms has been the subject of much research interest for several decades. This picture shows the "Z" machine at Sandia Labs in New Mexico producing, for a tiny fraction of a second, TW of power - about times the average electricity consumption of the entire planet. How should such extreme behaviour be described mathematically? The International Congresses of Mathematicians ICMs take place every four years at different locations around the globe, and are the largest regular gatherings of mathematicians from all nations.

However, as much as the assembled mathematicians may like to pretend that these gatherings transcend politics, they have always been coloured by world events: the congresses prior to the Second World War saw friction between French and German mathematicians, for example, whilst Cold War political tensions likewise shaped the conduct of later congresses.

The concept of equilibrium is one of the most central ideas in economics. It is one of the core assumptions in the vast majority of economic models, including models used by policymakers on issues ranging from monetary policy to climate change, trade policy and the minimum wage. But is it a good assumption? For example, let us consider the case of a particle moving on the real line. Homogenization theory aims to understand the properties of materials with complicated microstructures, such as those arising from flaws in a manufacturing process or from randomly deposited impurities.

The goal is to identify an effective model that provides an accurate approximation of the original material. Oxford Mathematician Benjamin Fehrman discusses his research. The discomfort experienced when a kidney stone passes through the ureter is often compared to the pain of childbirth. Severe pain can indicate that the stone is too large to naturally dislodge, and surgical intervention may be required. A ureteroscope is inserted into the ureter passing first through the urethra and the bladder in a procedure called ureteroscopy.

Fusion energy may hold the key to a sustainable future of electricity production. However some technical stumbling blocks remain to be overcome. Even the sturdiest solid solutions suffer damage over time, which could be avoided by adding a thin liquid coating. In many natural systems, such as the climate, the flow of fluids, but also in the motion of certain celestial objects, we observe complicated, irregular, seemingly random behaviours.

These are often created by simple deterministic rules, and not by some vast complexity of the system or its inherent randomness. A typical feature of such chaotic systems is the high sensitivity of trajectories to the initial condition, which is also known in popular culture as the butterfly effect. Oxford Mathematician Riccardo W. Its study is at the interface of probability, number theory, analysis, and geometry. The applications to physics include the study of ocean waves, earthquakes, sound and other types of waves.

At the beginning of the twentieth century, some minor algebraic investigations grabbed the interest of a small group of American mathematicians. The problems they worked on had little impact at the time, but they may nevertheless have had a subtle effect on the way in which mathematics has been taught over the past century.

In a seminal paper, Alan Turing mathematically demonstrated that two reacting chemicals in a spatially uniform mixture could give rise to patterns due to molecular movement, or diffusion. This is a particularly striking result, as diffusion is considered to be a stabilizing mechanism, driving systems towards uniformity think of a drop of dye spreading in water. What does boiling water have in common with magnets and the horizon of black holes? They are all described by conformal field theories CFTs! We are used to physical systems that are invariant under translations and rotations.

Imagine a system which is also invariant under scale transformations. Such a system is described by a conformal field theory. Remarkably, many physical systems admit such a description and conformal field theory is ubiquitous in our current theoretical understanding of nature. Oxford Mathematician Tom Oliver talks about his research in to the rich mine of mathematical information that are L-functions.

Americans drink an average of 3. Those steeped in a discipline do not become engaged in the issues of control, contingency and the epistemology of the economy in the way that engineers do. By contrast, the practitioner of the transitory regime experiences two referents. There nevertheless exists a hierarchy where the disciplinary orientation is paramount, providing legitimacy.

Second, the epistemology of those engaged in the transitory regime is bifurcated and segmented. The epistemological components and their relations of utilitarian work are highly complex, contingent and changing. Technical endeavor, as all else is unstable. What counts as valid and outstanding on one day, is evaluated as unacceptable the following day issues of reliability or safety. In the disciplinary regime the epistemology of research is relatively standardized and stable. It resists local drama. Those engaged in the transitory path adhere totally to the epistemology of disciplinary requirements while working in that regime.

Are they intellectually ambidextrous? Based on the case of Kelvin and our own on-site observation of contemporary practice, it appears that they may, to some degree, superimpose transitory mental operations on a stronger, permanent disciplinary epistemological substrate when engaged in enterprise.


They may mobilize selected components of utilitarian epistemology in order to address specific questions of possible entrepreneurial interest that they had dealt with or formulated in the course of earlier disciplinary research. On completion of their enterprise tasks, the relevant practitioners travel back to their disciplinary homeland, where they comfortably and entirely re-engage disciplinary epistemology. However, the essential question for a deep understanding of the specificity and operations of the transitory regime is this: how do practitioners of the regime manage both to sustain a strong connection with disciplinarity and to move temporarily beyond the disciplinary base as they circulate into enterprise and then back toward the discipline?

What is the specific mechanism that underpins this sequential trajectory? Respiration is a dynamic that originates inside the disciplinary regime. It can be seen as a motor that promotes the circulation of certain practitioners between the disciplinary regime and enterprise, and hence as a principal force underpinning the transitory regime. Respiration is the interval between closing one research project and entering into another, when practitioners take stock of the relevance of their past research work and instruments, and consider what fresh research questions, tied to what novel instruments, might now be possible.

It is time out for reflection on past accomplishments: whether they should be continued or, alternatively, what new paths might be embarked upon. In many instances, respiration leads scientists to perpetuate or reformulate research inside their discipline. In other instances, however, a variety of considerations induce them to look beyond their discipline and to envisage participation in a precise entrepreneurial project. It is important to note that in the latter case, disciplinary practitioners' interest in engagement with industry is based on two considerations. First, they wish to express in concrete terms, and explore in an alternative environment (enterprise), the range of possibilities of their earlier disciplinary findings.

They may anticipate that by connecting their findings to existing commercial technologies, it may be possible to make the technologies more efficient or to allow them to address new problems. Based on precise elements of recent research, the practitioners project the possibilities of implanting their work in diversified terrains, and in so doing open their horizons. In effect, alternative expressions of extant work, and curiosity, represent the faces of respiration. Second, and linked to this, this propensity is strongly connected to a kind of curiosity of a different nature from the one that propels them in their disciplinary work.

It is worth noting that, very surprisingly, the concept of curiosity is often strikingly absent from reflection on the operation of science and technology. Curiosity connected to disciplines is framed in terms of the understanding of self-referencing physical objects and forces. In enterprise, curiosity ultimately focuses on the applicability of laboratory disciplinary results to concrete situations and markets.

The relevance of our respiration model is that it allows an understanding of the motives and mechanisms at work in the circulation between the two regimes. Subsequent respirations in enterprise most frequently induce practitioners to return to their discipline, which remains their primary referent. This anchors scientists' work in cognition and disciplinary referents, and excludes the idea of a mixed and undifferentiated configuration of the sort proposed by technoscience. In the case of the trajectory of Lord Kelvin, there are two episodes: one which entails respiration and corresponds to the transitory regime, and a second where respiration is absent and which does not adequately coincide with the transitory logic.

In the case of the telegraph example, Kelvin's participation was demand-driven; that is to say, it was prompted by a request from industry demanding expertise, and was not an expression of Kelvin's earlier disciplinary efforts. The exogenous stimulus for the work and its disconnectedness from discipline are the decisive features. By contrast, Kelvin was engaged in a transitory episode when he designed and built his various metrology instruments.

These were based in disciplinary efforts and were offered as gifts to enterprise, which could develop them as appropriate. Each regime is the product of its particular historical circumstances, and this fundamental fact emerges with outstanding force in the case of the research-technology regime of science and technology production and diffusion.

It arose in Germany during the last third of the 19th century out of a conjunction of military, governmental, industrial, instrument-maker and, to a lesser extent, academic forces. Assertive Prussian ambitions and aggrandisement; the explosive growth of German industry and its extension into new chemical, electrical, naval and infrastructure domains; swift progress in science research; government determination to introduce and impose strong standards and norms on industrial products; and keen interest among some instrument makers in competing internationally with the French and British and in transforming the fundamental logic of their craft all combined to forge a new regime of science production and diffusion.

German culture thus became the nexus for the rise of the research-technology regime. The foundational concept, at least among some government thinkers, military figures, captains of industry, and above all Berlin instrument makers, was the generation of an absolutely novel form of technology capable of addressing a diversity of applications in a broad range of disciplinary and industrial domains.

The goal was in fact to establish an original epistemological matrix. Rather than deliberating on the laws of nature, the new regime instead proposed to explore the laws of instrumentation. Mastery of the laws of instrumentation could in turn lead to development of generic devices. A generic device would express fundamental principles of instrumentation that could subsequently be integrated into specific technological functions and tasks through proper adaptation.

A generic instrument would thus, according to an extended group of Berlin instrument-making firms and then firms in other German cities, embed basic, very general instrument concepts that would allow for open-ended flexibility and multi-functionality. The generic principle would permit aspects of the device to be effectively re-designed for local niche application without disorganizing the technological logic and division of labor within the variety of environments in which it operates. Adoption through adaptation, through re-embedding of generic instrument laws, comprised the underlying logic.

A range of small Berlin companies became committed to this project, most active being the Hench Company. Government policy insisted on its institutionalization and spread. A huge compendium published by Leopold Loewenhertz pressed home the need to generate generic devices, which could subsequently lie at the heart of convergence between many technologies and diverse domains of science research (cf. Loewenhertz).

This new sphere, labelled "research-technology", began to be perceived as a transverse mechanism for extending technical and science work and for introducing order into what was increasingly viewed as a fragmented arena of learning, skills and technology.

Something had to be undertaken to introduce convergence, and research-technology's generic instrument artifacts were viewed as one such key mechanism (cf. Shinn, a, b). In effect, research-technology comprised one antidote against excess mental and material segmentation. Instances of generic instruments from the late 19th century through the 20th century include, for example, the stereoscope of Carl Pulfrich. This device incorporated unique optical arrangements for producing three-dimensional images. The generic three-dimensional optics was quickly adapted by users for undertakings in naval gunnery, precision diagnosis of problems in architecture, the study of historic sculpture, and topography and infrastructure work (railway and road construction).

Another instance of generic research-technology took the form of automatic switching: generic principles and artifacts that were used in astronomy research, in the chemical industry and in electrical power regulation. More recent examples include the development of the Fourier-transform spectroscope by Pierre Jacquinot, Janine and Pierre Connes and Peter Fellgett, the rhumbatron by William Hansen, the oscilloscope and the laser.

In research-technology, genericity sometimes also surpasses purely material artifacts. Genericity can cover non-material, purely mental technological apparatus as well. Simulation counts as a contemporary generic device (cf. Lenhart et al.). Cybernetics too is considered by some to comprise a generic conceptual instrument. A concrete instance of how German instrument makers organized their apparatus helps illustrate the logic that lies behind their generic philosophy.

In the material organization of traditional instrument exhibitions, German instrument makers, like those of other countries, exhibited their innovations side by side, with no regard to their underlying logic. Electrical devices were arranged together with other electrical apparatus, and the same held for optical, mechanical and other devices. This suddenly changed among Berlin instrument specialists when, for the first time, generic principles constituted expository practice.

A generic instrument law that could find expression in optics, magnetism, and electricity systematically grouped products of all sorts relevant to the underlying instrument law. In this fashion, attention was immediately drawn to the underlying principle and to the myriad adaptations that it could express. In so doing, research-technology emphasized the transverse commonality of what otherwise superficially appeared as fragmented, differentiated forms of knowledge and technology. Through such a redistribution of devices, the federative, or at least the confederative character of science and technology, became visible.

This transverse logic was particularly noteworthy at the Saint Louis Universal Exhibition, where many observers took note of the new logic that stood behind the organization of artifacts, and thereby behind science and technology. Two additional events reveal the specific dynamics of research-technology.

The various sections of the Versammlung were highly defined; membership depended on training, cognitive domain and profession. The groups were distinct and jealous of their separateness and autonomy. The Versammlung indeed acted as an association, having no confederal or federative ambitions. German research-technologists mounted a vigorous campaign to become part of the association. The effort was at first stiffly resisted. Opposition focused on the plan of research-technologists to introduce a transverse section.

Generic instrumentation was intended to straddle the particularities of the other standing Versammlung groups. This was initially perceived as a threat to the traditional autonomy of the historical sections. Nevertheless, generic instrumentation makers at last managed to be admitted as a kind of semi-recognized renegade section. It never achieved full membership, due to its insistence on a transversalist theme and strategy - an approach to science and technology intended to form a bridge between sub-groups and to promote systematic circulation of ideas, materials and men across all regimes of science and technology production and diffusion.

Research-technology also figured in the development of the Physikalisch-Technische Reichsanstalt (PTR). The PTR comprised two sections, one for science and another for technology-related endeavor (cf. Cahan). While the science section, directed by Hermann von Helmholtz, was devoted to fundamental research, the orientation of the technology body remained ill defined.

One possibility would involve the introduction and implementation of industry standards and norms. A second option focused on engineering, and more specifically on research associated with engineering education. This path was supported by the mighty German engineering lobby. Research-technology comprised a third area. The goal here would be research on generic devices, their testing and their dissemination. The aforementioned champion of research-technology, Leopold Loewenhertz, was the principal advocate of this line of action.

To the surprise of many, it was generic instrument research that prevailed. Loewenhertz became the PTR's technology section leader for a brief period, after which the institution's technology research tended to become less clear-cut in direction and even to fade. Despite this, for a short moment the research-technology trajectory advocated and practiced by generic instrumentation practitioners held sway and demonstrated its strength. As will now be shown, when taken together, the trajectory, forms of circulation and synergy, interstitial arena and boundary-crossing format constitute signatures of research-technology practitioners, and this signature contrasts singularly with the characteristics of the three other regimes described previously.

The production of generic, open-ended, multi-function, multi-purpose and highly flexible artifacts requires operating out of an interstitial arena. Research-technologists work in the open, unoccupied spaces between dominant institutions and organizations - the university, industry, military, state metrology services and the like. At various junctures in their career, they sometimes develop connections with a particular organization, yet subsequently move back to the interstitial arena.

This arena provides several key advantages to research-technologists. First, it protects them against short-term demands from clients requiring specific devices to resolve well-defined, particular problems. Stated differently, the research-technologist here enjoys a temporal space relatively free from immediate exogenous constraint, where he can focus on the underlying principles of instrumentation, as opposed to simply designing or building an apparatus that fits a narrow need. He who works for everyone is the bondsman of no one.

Second, the interstitial arena facilitates abundant boundary-crossing opportunities. Research-technologists cross boundaries as they temporarily pass into local niche domains when collecting technical information or looking for problem categories that might be useful in generating a generic device. They likewise engage in boundary crossing when assisting local users in adapting a generic apparatus, or in helping extract particular appropriate components, in the complex process of generic instrument adoption. Reverse boundary crossing also occurs when local niche users themselves move out of their habitual organizational settings (industrial, academic, etc.).

Through such countless boundary crossing and reverse boundary crossing, research-technology is often highly synergistic. Circulation is of foremost significance in this regime. At this juncture it is important to distinguish between the research-technology regime and the practitioners of the transitory regime, who are also involved in boundary crossing. The latter shift between the disciplinary referent and the utilitarian referent to the extent that they operate with reference to enterprise or other beyond-discipline organizations and interests.

Such boundary crossing, however, occurs infrequently in the transitory regime, as scientists usually cross over only two or three times in a career. This contrasts with research-technologists, who routinely move across frontiers, doing so countless times. So in one case boundary crossing remains an exceptional activity, while in the other it is normative and abundant.

Another foundational difference is that practitioners of the transitory regime are wed to their discipline. Their discipline constitutes the hub from which they operate. It provides identity and legitimacy.

With research-technologists, the primary identity and referent is at all times instrumentation and instrument-related endeavors. Genericity and the principles of instrumentation comprise their yardstick of achievement rather than the laws of nature and disciplinary distinctions. The research-technology regime is singular to the extent to which it fosters the circulation of practitioners, materials and ideas across boundaries within science, and between science and other forms of social action.


Through generic instrumentation, communication occurs within academia, and between science, industry, state services, the military and beyond. Research-technology spawns a kind of lingua franca. Specific vocabularies, metrologies and images are embedded within a generic device. As the generic instrument becomes re-embedded in a local user niche, part of that particular set of representations is transferred into the local environment and becomes part of users' habitus. Instrument operators from a multitude of diverse domains thus appropriate, through integrating the language of the generic vector, a minimal shared language.

The common language enables actors from different horizons to communicate and interact effectively, independently of their origins and setting. In this way, research-technology functions as a mechanism that promotes convergence. Research-technology thus partly neutralizes the fragmentation often associated with the contemporary multiplication of sub-groups, sub-functions, and an enhanced societal division of labor. This lingua franca is foundational to the linkage capacity that makes this regime consciously transverse.

The regime sustains the efficiencies commensurate with differentiation, and at the same time generates strong association. One perceives here that differentiation and interaction are not necessarily contradictory. Research-technology emphasizes and structures the complementarity between differentiation and forms of integration. By serving as a crossroads, it generates and amplifies synergy between domains.

The research-technology regime affords an additional element of cohesion, this one based in the practices of instrument operation. As large numbers of generic-device-based apparatus are successfully used by different groups of scientists, engineers, technicians and other operators in vastly different environments, performing contrasting functions for alternative purposes, confidence in the results yielded by the apparatus develops and strengthens.

The sole commonality between the various expressions of the different devices is their generic components and principles. Shared confidence leads to shared belief, itself grounded in the regularity and reliability of instrument output. This instrument output is independent of user, use, function, geography and culture.

The generic-grounded system produces a form of robustness within science. Through shared experience of operating devices and obtaining comparable findings, practitioners perceive their apparatus as yielding "valid" results. This validation takes on the form of "universality". However, the universality born of research-technology is not solely the stuff of epistemology. The practical universality of research-technology generic instrumentation contains a social component, rooted in shared social experience by heterogeneous groups.

Practical universality is hence partly sociological. It contains elements of communication and of collective dynamics and interactions. It also entails a material component, since the robustness of practical universality requires reliable, comparable and standardized instrument products. This triangle of reliability, comparability and standardization is the product of instrument genericity. If one's criterion for the "unity of science" is an unblemished homogeneous whole, a unified theory of science is inconceivable on historical, institutional, organizational, epistemological and social grounds.

The above analysis of the emergence and dynamics of the disciplinary, utilitarian, transitory and research-technology regimes of science and technology production and diffusion demonstrates the plural aspects of science. Based on structure, output and history, one is compelled to think of "science" simultaneously in terms of a whole and of the "sciences".

Each expression of science as a particular regime operates within a specific territory possessing its own form of symbolic and material capital, its characteristic configurations of conflict with their specific rules for judging what counts as a valid or unacceptable output, and a highly defined market for its productions.

There are hence multiple forms of science, where the corpus of circulation and dynamics of circulation function differently. Each expression of science delimits its particular territory. The question nevertheless remains whether it is reasonable to speak in terms of "science". If one may speak of science in the singular, what legitimates this representation?

The sociologist Andrew Abbott stresses that boundaries serve principally to identify differences between entities. The social operation of a boundary is not to defend or protect, but instead to demarcate differentiations. It is thus fully justified to think in terms of an intertwined, transverse science which is demarcated from all other spheres of social activity - art, enterprise, law, government and so forth.

Science may better be likened to a crystalline structure. The crystal's atomic lattice is periodically aligned, and the crystal has internal regularities and characteristics that distinguish it from other crystals and from other forms of matter. Crystals also frequently possess local defects which alter their local geometry. While the crystal remains a differentiated entity, it nevertheless exhibits specific local variations.

A form of intertwined, transverse structure of science, despite its pluralistic features, may be upheld on a second register. The research-technology regime provides apparatus that introduces convergence and coherence between science's other regimes. Generic apparatus, like mathematics, offers data, results, a way of seeing, and an intelligibility that traverse boundaries (cf. Shinn, a; Bourdieu). Generic apparatus also promote the circulation of practitioners among the many territories comprising science. If science is viewed as composed of territories, generic devices federate these vast territories, providing them with a common language (in the guise of an instrument-based lingua franca) and, through shared practitioner expectations, experiences and results, even a form of practical universality.

The historical, material, experimental and psychological robustness of the generic factor connects the materials, concepts, predictive capacity and solidity of science. The very transverse aspect of science comprises one of its salient strengths, as there exists a measure of complementarity between its several regimes. The intertwined, transverse territories of science are visible in the growing circulation between its components in the form of cross-borderland regime movement. Finally, and surprisingly, genericity and the lingua franca of the research-technology regime contribute to transversality in a second and rather unexpected fashion: by reinforcing the stability of the other regimes.

This adaptation must be considered a strong element in the consolidation of each territory of science and of its disciplines. The circulation and utilisation of a generic instrument contributes in that sense to strengthening the identity and particularity of its users. So, by circulating through the different disciplines, a generic instrument at once creates the possibility of links and exchanges between different practitioners and, through its very utilisation and adaptation, contributes to redefining and consolidating the borders between the disciplines.

Indeed, genericity and universality contribute to keeping the frontiers between territories relevant, and even necessary, for the sake of these different territories of research and the consolidation of their very referents. The same reflection can be made about the lingua franca. As scientists of different disciplines speak together across the borders of their territories, they adapt their language and try to find common concepts and representations of reality so that they can communicate and perhaps work together.

When meeting at the borderland and trying to organize a project together, scientists are caught in a double movement: they adopt a common language in order to communicate, but in so doing they continue to keep their own disciplinary referents. The lingua franca must be seen, in that sense, as the pidgin that makes communication possible between groups that are strangers to one another, and at the same time as a factor that reinforces one's own language, culture and identity, where one feels "at home" in a familiar terrain of research.

More generally speaking, one can say that what seems to blur the borders between disciplines in fact also contributes to maintaining the frontiers between them, and between science and technology, and enhances the relevance of each regime and the inadequacy of the idea of the emergence of a so-called technoscience.

Abbott, A. Things of boundaries. Social Research, 62.

Abbott, A. Chaos of disciplines. Chicago: University of Chicago Press.

Abir-Am, P. From multidisciplinary collaborations to trans-national objectivity: international space as constitutive of molecular biology. In: Crawford, E., Denationalizing science: the contexts of international scientific practice. Dordrecht: Kluwer Academic Publishers.

Auger, J. Annals of Science, 61, 3. In: Birck, F. Paris: Editions de la Maison des Sciences de l'Homme.

Bechtel, W. Integrating sciences by creating new disciplines: the case of cell biology. Biology and Philosophy, 8, 3.

Bellacasa, M. Matters of care in technoscience: assembling neglected things. Social Studies of Science, 41, 1.

Ben David, J. Roles and innovations in medicine. American Journal of Sociology, 45.

Bensaude-Vincent, B. The construction of a discipline: materials science in the United States. Historical Studies in the Physical and Biological Sciences, 31, 2.

Bensaude-Vincent, B. Les vertiges de la technoscience.

Birck, F.

Bourdieu, P. Paris: Raisons d'agir.

Brown, N. Contested futures: a sociology of prospective techno-science. Paris: Lavoisier Librairie.

Cahan, D. An institute for an empire: the Physikalisch-Technische Reichsanstalt. Cambridge: Harvard University Press.

Carrier, M. Science in the context of application: methodological change, conceptual transformation, cultural reorientation. Dordrecht: Springer.

Crawford, E.

Cambrosio, A. Arguing with images: Pauling's theory of antibody formation. In: Pauwels, L., Visual cultures of science: rethinking representational practices in knowledge building and science communication. Dartmouth: Dartmouth College Press.

Clain, O.

Crosland, M. Science under control: the French Academy of Science. Cambridge: Cambridge University Press.

Fox, R. Education, technology and industrial performance.

Galison, P. Image and logic: a material culture of microphysics.

Gibbons, J. The new production of knowledge: the dynamics of science and research in contemporary societies. London: Sage.

Hacking, I. Representing and intervening: introductory topics in the philosophy of natural sciences.

Hayles, K. Implications of the new technoscience. Bristol: Intellect Books.

Heilbron, J. The rise of social science disciplines in France.

Hoddeson, L.