
Cloud vs Local GPU Hosting (what to use and when?)




How to use cloud GPUs: https://pythonprogramming.net/cloud-gpu-compare-and-setup-linode-rtx-6000/
Channel membership: https://www.youtube.com/channel/UCfzlCWGWYyIQ0aLC5w48gBQ/join
$20 Linode GPU and VPS server hosting credit: https://linode.com/sentdex
Discord: https://discord.gg/sentdex
Support the content: https://pythonprogramming.net/support-donate/
Twitter: https://twitter.com/sentdex
Instagram: https://instagram.com/sentdex
Facebook: https://www.facebook.com/pythonprogramming.net/
Twitch: https://www.twitch.tv/sentdex

40 thoughts on “Cloud vs Local GPU Hosting (what to use and when?)”

  1. Great insights! If I am not mistaken, I haven't seen you mention Google's TPUs for training models. In some presentations they even mention that it is free. (I don't know to what degree it is free, though. 🙂 ) Would be interesting to hear your thoughts on Google's TPUs…
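
    For what it's worth, Colab does offer free TPU access (within usage limits), and attaching to it only takes a few lines. This is a minimal sketch, assuming a Colab TPU runtime and the TF 2.x distribute API (these names moved out of "experimental" in later releases):

      import tensorflow as tf

      # Colab supplies the TPU address; an empty string lets the resolver find it.
      resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
      tf.config.experimental_connect_to_cluster(resolver)
      tf.tpu.experimental.initialize_tpu_system(resolver)
      strategy = tf.distribute.experimental.TPUStrategy(resolver)

      # Build and compile inside the strategy scope so the variables live on the TPU.
      with strategy.scope():
          model = tf.keras.Sequential([
              tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
              tf.keras.layers.Dense(10, activation='softmax'),
          ])
          model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')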

  2. Cool video, I wonder why it's not members-only though…
    ^^
    With tools like Google Colab, for me the future is clearly cloud GPUs.
    I don't have the money to buy a powerful system dedicated to ML, nor to keep maintaining it with the newest hardware, etc…
    With internet connections getting faster and faster, in a few years we'll all have some sort of cheap Raspberry Pi laptop and do everything in the cloud 🙂 ML, gaming, etc…

  3. At what point would you say the cost of one of these GPUs would need to fall relative to the cloud rates? Would breaking even at 1,500 hours still leave you in the cloud camp? What about 1,000? Where would you put your rule of thumb?
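
    One rough way to frame the break-even point is to divide the price of the local card by the cloud hourly rate. A minimal sketch with purely illustrative numbers (ignoring electricity, the rest of the machine, and resale value):

      # All figures are hypothetical, just to show the arithmetic.
      gpu_cost = 1200.0      # local GPU price, USD
      cloud_rate = 1.50      # cloud GPU instance, USD per hour
      break_even_hours = gpu_cost / cloud_rate
      print(break_even_hours)  # 800.0 hours of training before buying wins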

  4. WARNING:tensorflow:From D:\Software\SWinstaller\Anaconda\envs\project\lib\site-packages\tensorflow\python\ops\init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.

    Instructions for updating:

    What can I do?
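
    That message is only a deprecation notice, not an error, so training still runs. If you want to quiet it, a minimal sketch (assuming a 1.x-style TensorFlow install, which is what emits this warning):

      import tensorflow as tf

      # Hide TensorFlow's warning-level log messages (on plain TF 1.x you can
      # use tf.logging.set_verbosity(tf.logging.ERROR) instead).
      tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)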

  5. I'm facing bottlenecks with my 2080 Ti running RNNs and CNNs using Keras and tf-gpu (I've double-checked that the GPU is used during training). The only advice I can find online is to increase my batch size, but I still don't see better training speed on the GPU than on the CPU.
    I would really appreciate you making a video on deep learning with a local GPU.
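
    Two things worth checking before blaming the card: that TensorFlow actually sees the GPU, and that the input pipeline can keep it fed (small batches plus a slow pipeline leave the GPU idle, so it looks no faster than the CPU). A minimal, self-contained sketch with dummy data, assuming a TF 2.x-era install:

      import numpy as np
      import tensorflow as tf

      # An empty list here means a CPU-only install, no matter what hardware is present.
      print(tf.config.experimental.list_physical_devices('GPU'))

      # Dummy data just to make the example runnable.
      x_train = np.random.rand(10_000, 32).astype('float32')
      y_train = np.random.randint(0, 10, size=(10_000,))

      # Larger batches plus prefetch overlap data loading with training on the GPU.
      dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
                 .shuffle(10_000)
                 .batch(256)
                 .prefetch(tf.data.experimental.AUTOTUNE))

      model = tf.keras.Sequential([
          tf.keras.layers.Dense(128, activation='relu', input_shape=(32,)),
          tf.keras.layers.Dense(10, activation='softmax'),
      ])
      model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
      model.fit(dataset, epochs=2)

    For RNNs specifically, the CuDNN-backed Keras layers (CuDNNLSTM/CuDNNGRU in the 1.x-era API) are usually where the big GPU speedup comes from.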

  6. Just messing around with the first couple of fast.ai projects, it took me about 25 minutes to train a ResNet-50 (~15 minutes for the ResNet-34) on their cat/dog dataset (~7,300 images) with an old 1050 Ti 4GB.
    Not very fun to double the course length just from training the smallest dataset xD

  7. Hey sentdex, thanks for your videos, I really enjoy them, but I can't seem to find a way to learn AI on my own because I can't come up with a "real-world scenario". Maybe you could do some videos on real-world stuff? Like what you would use AI for in mobile apps, or really anything that is somewhat "real life".
    Thanks for everything! You are great

    “Hey man, I will unsub because of ONE unintentional mistake. I am triggered and offended by YOU, even though you did tremendous work for us.”

    — Generation Ungrateful

