
Theta in statistics


Introduction to Bayesian statistics, part 1: The basic concepts

302 539 views | 3 Mar. 2016


An introduction to the concepts of Bayesian analysis using Stata 14. We use a coin toss experiment to demonstrate the ideas of prior probability, likelihood functions, posterior probabilities, posterior means, and credible intervals. Copyright 2011-2019 StataCorp LLC. All rights reserved.
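For readers who want to reproduce the flavour of the coin-toss example outside Stata, here is a minimal Python sketch of the same Beta-Binomial update. The Beta(1, 1) prior is the uniform prior mentioned in the comments below; the data (48 heads in 100 tosses) are made-up numbers, not the ones used on screen.

```python
# A minimal sketch (not the video's Stata code) of a Beta-Binomial coin-toss
# analysis: uniform Beta(1, 1) prior, Bernoulli/Binomial likelihood.
from scipy import stats

a_prior, b_prior = 1.0, 1.0            # Beta(1, 1) = uniform prior on theta
heads, tosses = 48, 100                # hypothetical data, not the video's numbers

# Conjugate update: Beta(a, b) prior + binomial data -> Beta(a + heads, b + tails)
posterior = stats.beta(a_prior + heads, b_prior + (tosses - heads))

print("posterior mean:", posterior.mean())                 # point estimate of theta
print("95% credible interval:", posterior.interval(0.95))  # equal-tailed interval
print("P(theta > 0.5):", 1 - posterior.cdf(0.5))           # a posterior probability
```

Stata's bayesmh command arrives at a posterior of the same kind by MCMC simulation rather than by the closed-form conjugate update used in this sketch.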

Rajat Nag

Excellent video, you just saved my life.

E Zh

Thank you very much for the explanations of non-informative prior and informative prior. Very helpful for my research.

Mike Silva

What the bleep did he just say?

Menghao Li

1:25 What does this mean? Prior = Beta(1.0, 1.0)

Rizwan Niaz

How do I calculate an odds ratio in Bayesian ordered logistic regression? Please tell me.

Pankaj Verma

Thank you for your kind help.

Wild Rain

Thanks. Perhaps you could do another video and call it part 0, covering the building blocks for this part 1. An introduction to the introduction, that is :)

Jenny apl

The posterior is proportional to the likelihood × prior, not equal to it.
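For readers following this point, Bayes' rule written out makes the distinction explicit (a standard statement of the rule, not a quotation from the video): the product of likelihood and prior must still be divided by a normalizing constant to become the posterior.

```latex
p(\theta \mid y)
  = \frac{p(y \mid \theta)\, p(\theta)}{\int p(y \mid \theta')\, p(\theta')\, d\theta'}
  \;\propto\; p(y \mid \theta)\, p(\theta)
```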

Brynner Hidalgo

Why is the posterior narrower at 5:15?

Marketa Svehlakova

Thank you. That was very clear and helpful.

Jinu Daniel

so fucking fast..

Liviu Florescu

At 1:40, shouldn't the area under the graph be equal to 1? What does the y-axis represent?

SuperDayv

This is the best introduction to this that I've found online! Thanks!

editor ijsmi

I would like to share the details of the following book:
Bayesian Methodology: An overview with the help of R software
https://www.amazon.com/dp/B07QCHTR54 - E-book
https://www.amazon.com/dp/109293989X - Paper back
ISBN-13: 978-1092939898
International Journal of Statistics and Medical Informatics
www.ijsmi.com/book.php

Chris X

One question I would have on this is: how can you be sure you are not biasing your result by using these informative priors? I believe the most conservative approach is indeed the uniform prior (equivalent to "I don't know anything, so everything is equally plausible to me"), but when I start getting "clever" and choosing informative priors, I can't run a real hypothesis test with that, because I have already told the coin to be 50:50 (while someone could potentially have given me a magic coin that is 10:90).
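One standard way to examine this concern is a prior-sensitivity check: fit the same data under a flat prior and under the informative prior and see how much the posteriors differ. The sketch below is illustrative only; the Beta(30, 30) prior and the 9-heads-in-10-tosses data are made-up numbers, not anything from the video.

```python
# Prior-sensitivity sketch (hypothetical counts): compare the posterior under a
# flat Beta(1, 1) prior with the posterior under an informative Beta(30, 30)
# prior concentrated around theta = 0.5.
from scipy import stats

heads, tosses = 9, 10                  # hypothetical data from a suspicious coin
tails = tosses - heads

priors = {"flat Beta(1,1)": (1, 1), "informative Beta(30,30)": (30, 30)}
for name, (a, b) in priors.items():
    post = stats.beta(a + heads, b + tails)
    lo, hi = post.interval(0.95)
    print(f"{name}: posterior mean = {post.mean():.3f}, "
          f"95% CI = ({lo:.3f}, {hi:.3f})")
```

With only ten tosses the informative prior dominates the posterior; with enough data the two posteriors converge, which is the usual argument that a sensible prior does not permanently bias the result.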

Hitendra Parmar

Maybe the video creator intended to explain Bayesian statistics, but did not.

The concepts start to be explained, then there is a stepwise jump into mentioning prior and posterior probability, with on-screen equations introduced but no further explanation; it's as if it were read out of a technical manual that only 'insiders' know about. This then quickly turns into how to use the software and which buttons to press, which seems applicable to those who already know about Bayes and want to use the software, not to those who want an introduction.


So I'm sorry to say this video was not useful as an introduction to Bayesian statistics, and I would recommend giving it a miss.

JaneFord

Hi, thanks for the video. What I wonder is: what are "default priors" when it comes to Bayesian inference? As I understand it, priors are specific to each hypothesis or dataset, so how come some packages include these defaults? What do these priors entail?

Nyambaatar Batbayar

I have the same version of Stata as yours. However, my Bayesmh window doesn't have the "univariate distribution" option. What could be the reason? Can you give me a hint?

Jack

There's no information about what the Y in the graph refers to. This is unacceptable.

Ahmed Moneim

That was an excellent explanation of the interaction between the parameters; thanks a lot for putting in the time and effort to do the animations.

epicwhat001

This is an advanced "basic" concept.

BigFish Artwire

Finally I understand this thing. Thank you.

Solid Answers

Awesome, thank you! Animations are really helpful.

Doug Sinclair

What is the application tool you were using? Is it publicly available?
I am really struggling with a layered Bayesian theoretical puzzle and am not finding a clear path forwards, even after watching several videos.
Each helps me a little more, and yours, while more technical, was very helpful.
Thanks,
Doug

Alex Renouf

I understand nothing.

ohmy FLY

.75x speed

Jehan Gonsal

This is awesome. So intuitive and interesting. Why did we ever use null hypothesis testing? With the computational power we have now, this should be the norm.

Yanchen

great vid! so informative

gerwyn jones

Isn't there an error at 5:18?
Shouldn't the beta distribution's a and b be 86 and 84, NOT 106 and 114? The mean of Beta(86, 84) matches the mean shown on screen (0.506), whereas the mean of Beta(106, 114) is 0.482.
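The arithmetic behind this comment is easy to check: the mean of a Beta(a, b) distribution is a / (a + b). The snippet below only verifies the commenter's numbers; it says nothing about which parameterization the video actually intends.

```python
# The mean of a Beta(a, b) distribution is a / (a + b).
for a, b in [(86, 84), (106, 114)]:
    print(f"Beta({a}, {b}) mean = {a / (a + b):.3f}")
# Beta(86, 84)   mean = 0.506
# Beta(106, 114) mean = 0.482
```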

André Neves

Please, could you send us the video transcript?

sujith thiyagarajan

Excellent explanation, sir.

Bhavesh Solanki

excellent sir

Albert Cuspinera

Hi,
What does the choice of likelihood distribution depend on?
Thanks.

Jonathan

Too many basic errors: a claim like "distribution closer to .5" is not even formally defined.

Viet Ta

Wow, I gained more understanding from this video than from dozens of hours of classes.

Euro szka

I shouldn't be saying this out loud, but I don't know about you: I find this prior distribution and the Ledoit-Wolf shrinkage method for improved efficiency very difficult to picture, and don't get me started on these affecting eigenvalues instead of eigenvectors... it's a mess in my head right now. I really need to pull myself together.

André Neves

Please, could you point us to some friendly material about Bayesian inference?

David Lauenstein

Thank you. The first video that makes me understand this reasoning in one go.

강동현

Woo

valor36az

great explanation

Nadine

Would someone please tell me what he is saying at 0:28? Thank you.


An introduction to the concept of a sufficient statistic

36 673 views | 15 May. 2018


Explains what is meant by the concept of a ‘sufficient statistic’, and how these summary statistics are important in likelihood-based methods.
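As a rough illustration of the idea (a sketch under the usual Bernoulli coin-flip setup, not code from the lecture), the likelihood depends on the sample only through the total number of successes t = Σxᵢ, so two samples with the same t but different orderings give identical likelihood functions:

```python
# Sufficiency sketch for Bernoulli data: the log-likelihood depends on the
# sample only through t = sum(x), so two samples with the same t (but
# different orderings) produce exactly the same likelihood function for theta.
import numpy as np

def log_likelihood(theta, x):
    x = np.asarray(x)
    t, n = x.sum(), x.size
    return t * np.log(theta) + (n - t) * np.log(1 - theta)

sample_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]   # t = 6
sample_b = [0, 0, 0, 1, 1, 1, 1, 1, 1, 0]   # also t = 6, different ordering

theta_grid = np.linspace(0.01, 0.99, 99)
print(np.allclose(log_likelihood(theta_grid, sample_a),
                  log_likelihood(theta_grid, sample_b)))   # True: t is sufficient
```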

This video is part of a lecture course which closely follows the material covered in the book, "A Student's Guide to Bayesian Statistics", published by Sage, which is available to order on Amazon here: https://www.amazon.co.uk/Students-Guide-Bayesian-Statistics/dp/1473916364

For more information on all things Bayesian, have a look at: https://ben-lambert.com/bayesian/. The playlist for the lecture course is here: https://www.youtube.com/playlist?list=PLwJRxp3blEvZ8AKMXOy0fc0cqT61GsKCG&disable_polymer=true

vsenderov

This video sufficiently well explained the concept.

Paul Hubenig

Blue print on a blue background. Am I the only one having trouble reading this?

Pedro Ribeiro

Thanks so much! That was a good explanation about sufficiency!

Theshan Dilanka

thanks

Monica A H

which other book can we refer for this topic?

L.A.C

For example, what if you didn't know that the MLE of the joint likelihood is the sample mean; how would you use the sufficient statistic to help you estimate theta? That is the main point: sufficient statistics are supposed to help you estimate an unknown parameter when MLE or Bayes estimation fails. In the cases where MLE or Bayes estimation works, it is pointless to use a sufficient statistic, as in your coin flip example.

Harvey G

Hi there, thanks a lot for the video! I think that the sign in front of (N-t) should be a + not a -

jason chen

OMG, you are 300% better at conveying a concept than my inference professor.

Moo?

Thank you! this really helped

Abdulhamed Chribati

Well that was a sufficient video, thanks!

Hanna William Damarjian

A sufficient statistic is a function, but why is it a function (i.e. how was it developed to be a function)? What does the function have to do with helping to predict, say, a measure of central tendency or a measure of spread for a sample? What exactly does P(theta given t(X)) really mean? Can I not just compute the sample mean and standard deviation of my data values rather than using this idea of a sufficient statistic? It can be tedious, but why would I need the sufficient statistic?

I am just really confused. Could someone please break down fully, piece by piece, what this all means? I have an idea of what the likelihood function is, but the sufficient statistic is really confusing me, because I initially believed it was a numerical quantity used to represent some sort of descriptive statistic. In reality it is a function with conditions attached, and I don't understand why or how it all comes together.

Wiesje van den Heerik

Thanks for your video! How do you get to the logL? Where does t come from all of a sudden?

AndetSTK

This was a sufficient explanation ;)
Seriously though, great video

L.A.C

My problem is that you are using an example where MLE would get you a unique, reliable estimator. Sufficient statistics should be presented in cases where MLE or Bayes estimation doesn't work or is cumbersome to carry out.

Tyler Forrester

Thank you, Ben. This is a wonderfully clear explanation of sufficient statistics.

Kara Liu

Should be log L = t·log(θ) + (N − t)·log(1 − θ)

Hugo B

Thank you for the video! (PS: you subtracted the logs instead of adding them together.)

aaaaaawdaww

Well and logically explained! Just like a good prof should do!

Jack Bryde

Thank you!

Agustín Cabanas

Ben, when you write down P(theta | t(X)), are you implicitly giving away that you are a Bayesian? Isn't the parameter a fixed unknown constant? I am asking because in most textbooks one is usually told that a statistic T is sufficient if and only if the conditional distribution of the sample given t is independent of theta. But your way of approaching the issue is much more intuitive: setting P(theta | t(X)) = P(theta | X), then t(X) is sufficient for theta.
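For readers comparing the two framings in this comment, the standard relationship is as follows (a general statement of sufficiency, not a quotation from the video):

```latex
% Classical (Fisher) definition: conditioning on t(x) removes all dependence
% on theta,
p\bigl(x \mid t(x), \theta\bigr) = p\bigl(x \mid t(x)\bigr),
% while the Bayesian statement (which additionally assumes a prior on theta)
% says that t(x) carries all the information in x about theta:
p\bigl(\theta \mid t(x)\bigr) = p\bigl(\theta \mid x\bigr).
```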

Hongduo Sun

Great explanation! thanks!

Jason Zavaglia

Awesome - thanks Ben

Ted Woodsides

thank u sir

Ben Thomas

Thank you Ben! Very helpful

Walker Xuan

Thanks for your well-organized explanation!!


Statistics theta

191 views | 7 Sep. 2016