<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.sarg.dev/index.php?action=history&amp;feed=atom&amp;title=Random_optimization</id>
	<title>Random optimization - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.sarg.dev/index.php?action=history&amp;feed=atom&amp;title=Random_optimization"/>
	<link rel="alternate" type="text/html" href="https://wiki.sarg.dev/index.php?title=Random_optimization&amp;action=history"/>
	<updated>2026-04-05T18:06:39Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.44.2</generator>
	<entry>
		<id>https://wiki.sarg.dev/index.php?title=Random_optimization&amp;diff=108113&amp;oldid=prev</id>
		<title>imported&gt;Ranjan Nahata: /* top */Fixed grammatical error</title>
		<link rel="alternate" type="text/html" href="https://wiki.sarg.dev/index.php?title=Random_optimization&amp;diff=108113&amp;oldid=prev"/>
		<updated>2025-06-12T07:37:09Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;top: &lt;/span&gt;Fixed grammatical error&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Short description|Optimization technique in mathematics}}&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Random optimization (RO)&amp;#039;&amp;#039;&amp;#039; is a family of numerical [[Optimization (mathematics)|optimization]] methods [[Derivative-free optimization|that do not require the gradient]] of the optimization problem, and which can hence be used on functions that are not [[Continuous function|continuous]] or [[Differentiable function|differentiable]]. Such optimization methods are also known as direct-search, derivative-free, or black-box methods.&lt;br /&gt;
&lt;br /&gt;
The name random optimization is attributed to Matyas &amp;lt;ref name=matyas65random/&amp;gt; who gave an early presentation of RO along with basic mathematical analysis. RO works by iteratively moving to better positions in the search-space, sampled from, e.g., a [[normal distribution]] centred on the current position.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
{{See also|Simulated annealing}}&lt;br /&gt;
Let &amp;lt;math&amp;gt;f: \mathbb{R}^{n} \rightarrow \mathbb{R}&amp;lt;/math&amp;gt; be the fitness or cost function which must be minimized. Let &amp;lt;math&amp;gt;x \in \mathbb{R}^{n}&amp;lt;/math&amp;gt; designate a position or candidate solution in the search-space. The basic RO algorithm can then be described as:&lt;br /&gt;
&lt;br /&gt;
* Initialize &amp;#039;&amp;#039;&amp;#039;x&amp;#039;&amp;#039;&amp;#039; with a random position in the search-space.&lt;br /&gt;
* Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following:&lt;br /&gt;
** Sample a new position &amp;#039;&amp;#039;&amp;#039;y&amp;#039;&amp;#039;&amp;#039; by adding a [[normal distribution|normally distributed]] random vector to the current position &amp;#039;&amp;#039;&amp;#039;x&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
** If (&amp;#039;&amp;#039;f&amp;#039;&amp;#039;(&amp;#039;&amp;#039;&amp;#039;y&amp;#039;&amp;#039;&amp;#039;)&amp;amp;nbsp;&amp;lt;&amp;amp;nbsp;&amp;#039;&amp;#039;f&amp;#039;&amp;#039;(&amp;#039;&amp;#039;&amp;#039;x&amp;#039;&amp;#039;&amp;#039;)) then move to the new position by setting &amp;#039;&amp;#039;&amp;#039;x&amp;#039;&amp;#039;&amp;#039;&amp;amp;nbsp;=&amp;amp;nbsp;&amp;#039;&amp;#039;&amp;#039;y&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* Now &amp;#039;&amp;#039;&amp;#039;x&amp;#039;&amp;#039;&amp;#039; holds the best-found position.&lt;br /&gt;
&lt;br /&gt;
This algorithm corresponds to a (1+1) [[evolution strategy]] with constant step-size.&lt;br /&gt;
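The loop above can be sketched in Python. This is a minimal illustration only: the step size sigma, the iteration budget, and the sphere test function are arbitrary choices for the example, not part of Matyas&amp;#039; formulation.

```python
import random

def random_optimization(f, x0, sigma=0.3, max_iters=10_000):
    """Basic (1+1) random optimization: perturb the current point with
    Gaussian noise and keep the candidate only on strict improvement."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iters):
        # Sample a new position y by adding a normally distributed
        # random vector to the current position x.
        y = [xi + random.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fx > fy:  # greedy acceptance: move only if f(y) improves on f(x)
            x, fx = y, fy
    return x, fx  # x holds the best-found position

# Usage: minimize the sphere function, the sum of squared coordinates.
random.seed(0)  # fixed seed, for reproducibility of the example only
best, best_val = random_optimization(lambda v: sum(t * t for t in v),
                                     x0=[3.0, -2.0])
```

Because the step size is constant, progress stalls once the distance to the optimum becomes small relative to sigma; the variants discussed below adapt or shrink the sampling range for that reason.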
&lt;br /&gt;
== Convergence and variants ==&lt;br /&gt;
Matyas showed that the basic form of RO converges to the optimum of a simple [[unimodal function]] using a [[Limit (mathematics)|limit-proof]]: convergence to the optimum is certain to occur if a potentially infinite number of iterations are performed. However, this proof is of little practical use, because only a finite number of iterations can ever be executed. In fact, such a theoretical limit-proof also shows that purely random sampling of the search-space will inevitably yield a sample arbitrarily close to the optimum.&lt;br /&gt;
&lt;br /&gt;
Mathematical analyses were also conducted by Baba &amp;lt;ref name=baba81convergence/&amp;gt; and by Solis and Wets &amp;lt;ref name=solis81random/&amp;gt; to establish that convergence to a region surrounding the optimum is inevitable under some mild conditions for RO variants using other [[probability distribution]]s for the sampling. An estimate of the number of iterations required to approach the optimum was derived by Dorea.&amp;lt;ref name=dorea83expected/&amp;gt; These analyses were criticized through empirical experiments by Sarma,&amp;lt;ref name=sarma90convergence/&amp;gt; who applied the optimizer variants of Baba and Dorea to two real-world problems: the optimum was approached very slowly, and the methods were unable to locate a solution of adequate fitness unless the process was started sufficiently close to the optimum to begin with.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Random search]] is a closely related family of optimization methods which sample from a [[hypersphere]] instead of a normal distribution.&lt;br /&gt;
* [[Luus–Jaakola]] is a closely related optimization method using a [[Uniform distribution (continuous)|uniform distribution]] in its sampling and a simple formula for exponentially decreasing the sampling range.&lt;br /&gt;
* [[Pattern search (optimization)|Pattern search]] takes steps along the axes of the search-space using exponentially decreasing step sizes.&lt;br /&gt;
* [[Stochastic optimization]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
{{reflist|refs=&lt;br /&gt;
&amp;lt;ref name=matyas65random&amp;gt;&lt;br /&gt;
{{cite journal&lt;br /&gt;
|last=Matyas&lt;br /&gt;
|first=J.&lt;br /&gt;
|title=Random optimization&lt;br /&gt;
|journal=Automation and Remote Control&lt;br /&gt;
|year=1965&lt;br /&gt;
|volume=26&lt;br /&gt;
|number=2&lt;br /&gt;
|pages=246–253&lt;br /&gt;
|url=http://www.mathnet.ru/eng/at11288&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ref name=baba81convergence&amp;gt;&lt;br /&gt;
{{cite journal&lt;br /&gt;
|last=Baba&lt;br /&gt;
|first=N.&lt;br /&gt;
|title=Convergence of a random optimization method for constrained optimization problems&lt;br /&gt;
|journal=Journal of Optimization Theory and Applications&lt;br /&gt;
|year=1981&lt;br /&gt;
|volume=33&lt;br /&gt;
|number=4&lt;br /&gt;
|pages=451–461&lt;br /&gt;
|doi=10.1007/bf00935752&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ref name=solis81random&amp;gt;&lt;br /&gt;
{{cite journal&lt;br /&gt;
| last1=Solis | first1=Francisco J.&lt;br /&gt;
| last2=Wets | first2=Roger J.-B. | authorlink2=Roger J-B Wets&lt;br /&gt;
| title=Minimization by random search techniques&lt;br /&gt;
| journal=[[Mathematics of Operations Research]]&lt;br /&gt;
| year=1981&lt;br /&gt;
| volume=6&lt;br /&gt;
| number=1&lt;br /&gt;
| pages=19–30&lt;br /&gt;
| doi=10.1287/moor.6.1.19}}&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ref name=dorea83expected&amp;gt;&lt;br /&gt;
{{cite journal&lt;br /&gt;
|last1=Dorea&lt;br /&gt;
|first1=C.C.Y.&lt;br /&gt;
|title=Expected number of steps of a random optimization method&lt;br /&gt;
|journal=Journal of Optimization Theory and Applications&lt;br /&gt;
|year=1983&lt;br /&gt;
|volume=39&lt;br /&gt;
|number=3&lt;br /&gt;
|pages=165–171&lt;br /&gt;
|doi=10.1007/bf00934526&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ref name=sarma90convergence&amp;gt;&lt;br /&gt;
{{cite journal&lt;br /&gt;
|last1=Sarma&lt;br /&gt;
|first1=M.S.&lt;br /&gt;
|title=On the convergence of the Baba and Dorea random optimization methods&lt;br /&gt;
|journal=Journal of Optimization Theory and Applications&lt;br /&gt;
|year=1990&lt;br /&gt;
|volume=66&lt;br /&gt;
|number=2&lt;br /&gt;
|pages=337–343&lt;br /&gt;
|doi=10.1007/bf00939542&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Major subfields of optimization}}&lt;br /&gt;
&lt;br /&gt;
{{DEFAULTSORT:Random Optimization}}&lt;br /&gt;
[[Category:Optimization algorithms and methods]]&lt;/div&gt;</summary>
		<author><name>imported&gt;Ranjan Nahata</name></author>
	</entry>
</feed>