Posts Tagged ‘randomness’

January 23, 2012

3 × 3 × 3 × 3 × 3 × 3 × 3 × 3 × 3 × 3 × 3 × 3 × 3 … modulo 89

  • An application of group theory to Navy sonar — can you generate a sequence whose 1-differences are random? (whose 1-autocorrelations are nil)

  • Also, the patterns look a bit like low-discrepancy sequences.

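The post doesn't include the generating code; here's a minimal sketch of my reading of it (3 is a primitive root mod 89, so its powers run through all 88 nonzero residues before repeating):

> x <- numeric(88)
> x[1] <- 3
> for (n in 2:88) x[n] <- (x[n-1] * 3) %% 89
> plot(x)          # the low-discrepancy-looking pattern
> acf(diff(x))     # per the post's claim, the 1-differences' autocorrelations should be near nil
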
Hey! I made you some Wiener processes!

September 7, 2011

Check them out.

Here are thirty homoskedastic ones:

> homo.wiener <- array(0, c(100, 30))
> for (j in 1:30) {
      for (i in 2:nrow(homo.wiener)) {    # nrow, not length: length(homo.wiener) is 3000
          homo.wiener[i, j] <- homo.wiener[i - 1, j] + rnorm(1)
      }
  }

> for (j in 1:30) {
      plot(homo.wiener[, j],
           type = "l", col = rgb(.1, .1, .1, .6),
           ylab = "", xlab = "", ylim = c(-25, 25)
      )
      par(new = TRUE)    # overlay the next path on the same axes
  }

 

Here’s just the meat of that wiener, in case the for loops or window dressing were confusing.

homo.wiener[i] <- homo.wiener[i-1] + rnorm(1)

 

I also made you some heteroskedastic wieners.

> same for-loop encasing. ∀j make wieners; ∀j plot wieners
> hetero.wiener[i] <- hetero.wiener[i-1] + rnorm(1, sd = rpois(1, 1))
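
Spelled out, that's the homoskedastic encasing from above with the heteroskedastic step swapped in (the same wrapper fits the ar.wiener and the other one-line wieners below):

> hetero.wiener <- array(0, c(100, 30))
> for (j in 1:30) {
      for (i in 2:nrow(hetero.wiener)) {
          # the step's sd is itself a Poisson draw: sometimes 0 (a flat step), sometimes wild
          hetero.wiener[i, j] <- hetero.wiener[i - 1, j] + rnorm(1, sd = rpois(1, 1))
      }
  }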




 

It wasn’t even that hard — here are some autoregressive(1) wieners as well.

> same for-loop encasing. ∀j make wieners; ∀j plot wieners
> ar.wiener[i] <- ar.wiener[i-1] * .9 + rnorm(1)

 

Other types of wieners:

  • a.wiener[i-1] + rnorm(1) * a.wiener[i-1] + rnorm(1)
  • central.limit.wiener[i-1] + sum( runif(17, min=-1) )
  • cauchy.wiener[i-1] + rcauchy(1)      #leaping lizards!

     
  • random.eruption.wiener[i-1] + rnorm(1) * random.eruption.wiener[i-1] + rnorm(1)



     
  • non.markov.wiener[i-1] + non.markov.wiener[i-2] + rnorm(1)
  • the.wiener.that.never.forgets[i] <- sum( the.wiener.that.never.forgets[1:(i-1)] ) + rnorm(1)    # sum, not cumsum: each step leans on the entire history
  • non.wiener[i] <- rnorm(1)
     
  • moving.average.3.wiener[i] <- .6 * rnorm(n=1,sd=1) + .1 * rnorm(n=1,sd=50) + .3 * rnorm(n=1, mean=-3,sd=17)
  • wiener.2d <- array(0, c(2, 100))    # R names can't begin with a digit, so not "2d.wiener"
    for (i in 2:100) {
        # coin flip: step in one coordinate, hold the other still
        if (runif(1) > .5) {
            wiener.2d[1,i] <- wiener.2d[1,i-1] + rnorm(1)
            wiener.2d[2,i] <- wiener.2d[2,i-1]
        } else {
            wiener.2d[2,i] <- wiener.2d[2,i-1] + rnorm(1)
            wiener.2d[1,i] <- wiener.2d[1,i-1]
        }
    }


     
  • wiener.131d <- array(0, c( 131, 100 )); ....    # again, an R name can't begin with a digit
  • cross.pollinated.wiener
  • contrasting sd=1,2,3 of homo.wieners
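
None of this is in the original post, but a hypothetical little wrapper (make.wiener is my name for it, not the post's) turns most of the one-line update rules above into a plottable wiener:

> make.wiener <- function(step, n = 100) {
      w <- numeric(n)
      for (i in 2:n) w[i] <- step(w, i)
      w
  }
> plot(make.wiener(function(w, i) w[i-1] + rcauchy(1)),
       type = "l")    # e.g. the leaping-lizards cauchy.wiener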
     
 

What really stands out, writing about these wieners after playing around with them, is that logically interesting wieners don't always make for visually interesting wieners.

There are lots of games you can play with these wieners. Some of my favourites are:

  • trying to make the wieners look like stock prices (I thought sqrt(rcauchy(1)) errors with a little autocorrelation looked pretty good; see the sketch after this list)
  • trying to make them look like heart monitors
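
Here is one reading of that stock-price recipe; the signed square root and the .3 carry-over coefficient are my guesses at "a little autocorrelation", since sqrt() of a raw negative Cauchy draw would be NaN:

> stock.wiener <- numeric(100)
> d <- 0                                    # last increment, carried over for autocorrelation
> for (i in 2:100) {
      e <- rcauchy(1)
      d <- .3 * d + sign(e) * sqrt(abs(e))  # signed square root of a Cauchy shock
      stock.wiener[i] <- stock.wiener[i-1] + d
  }
> plot(stock.wiener, type = "l")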

Also it’s pretty hard to tell which wieners are interesting just from looking at the code above. I guess you will just have to go mess around with some wieners yourself. Some of them will surprise you and not do anything; that’s instructive as well.

 

VOICE OF GOD: WHAT’S UP. I AM THAT I AM. I DECLARE THAT THE WORD ‘WIENER’ IS OBJECTIVELY FUNNY. THAT’S ALL FOR NOW. SEE YOU WEDNESDAY THE 17TH.

Autocorrelation

May 9, 2011

A “truly” random, uniform random, completely random sequence might look like

◯◯⨯◯⨯⨯⨯⨯◯◯⨯◯◯⨯⨯◯⨯◯◯⨯⨯◯⨯⨯◯⨯◯◯⨯◯
R code: > xooooo = sample( c("◯", "⨯") , 30, rep = T) 

like the flips of a fair coin. But there are other “random”s as well.

Biased

For example, biased random, like an unfair coin with 4/5 bias, might generate a sequence that looks like this:

◯◯◯◯⨯◯◯⨯◯◯⨯◯◯◯◯◯◯◯◯◯◯◯◯◯◯◯◯◯◯◯

R code: > xooooo = sample( c("◯","◯","◯","◯", "⨯") , 30, rep = T)

 

Self-Correlated

But there’s also autocorrelated, or serially correlated, randomness.

◯◯◯◯◯◯◯◯◯◯◯◯◯◯◯◯◯◯◯◯⨯⨯⨯◯◯◯◯◯◯◯

For example, you feel fine ◯ 80% of the time and you’re sick ⨯ the other 20%, and of course the sick days are more likely to come one after another. Or 80% of the time you don’t smoke ◯, but then you buy a pack and all of a sudden you smoke ⨯⨯⨯ three days in a row. Once you’ve broken your resolve, you’re more likely to smoke again the next day.
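
The post gives no R code for this one; a minimal sketch, assuming a sticky two-state Markov chain (the stay-probabilities .95 and .8 are my choices, picked to keep roughly the 80/20 mix while letting the ⨯s bunch):

R code:
> stay <- c("◯" = .95, "⨯" = .8)    # how likely each state is to repeat tomorrow
> autocor <- character(30)
> autocor[1] <- "◯"
> for (i in 2:30) {
      prev <- autocor[i-1]
      autocor[i] <- if (runif(1) < stay[prev]) prev else setdiff(c("◯","⨯"), prev)
  }
> cat(autocor, sep = "")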

 

Equation-wise, autocorrelation amounts to adding a self-lag term to the other explanatory variables (plus unexplained residual), something like y[t] = ρ·y[t−1] + β·x[t] + ε[t], where ρ is the autocorrelation. Besides habit and viral invasion, autocorrelation brings many things under the penumbra of randomness:

  • income. The strong get more, while the weak fade. If you made a lot of money at your previous job, your next employer will pay you more, either to steal you away or simply because salary history determines compensation in HR’s formula.
  • unemployment. Jobless today, jobless tomorrow. Those who are unemployed for more than six months are even more likely to be unemployed for the long term. Also people who take care of their own kids as their job are likely to still be doing so next week and next year rather than working for a company.
  • likelihood of cancer. Back to the subject of smoking, your likelihood of getting cancer accumulates faster and faster the more you smoke. I’ve seen claims that there is a kink in the cumulative propensity to cancer rate above one pack / day.
  • stock prices. Stocks don’t just jump around in a Cauchy distribution, although maybe the daily change in stock price does. Daily change is a lag term, so that’s serial correlation.

Serial correlation or autocorrelation refers to things that bunch together. When it rains, it pours.

August 27, 2010

Which of these pictures come from a random normal distribution and which come from a mixed distribution?

plot(rnorm, -3, 3)

# one way to read "mixed": each point comes from N(0,1) or N(3,1) at random
# (the original rnorm(x) + rnorm(x-3) summed two samples; the values of x-3 only set the count)
mix <- function(x) {
    n <- length(x)
    ifelse(runif(n) < .5, rnorm(n), rnorm(n, mean = 3))
}

plot(mix, -3, 3)    # plot the function itself; mix(x) had no x in scope