Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
I've managed to get DeepDream working on my laptop. There's no single compiled app, but after a bit of tinkering I was able to get it running. I decided to feed some images of mine into it, which produces pretty neat, convoluted pictures of what the AI sees versus what we're supposed to see.

Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
These use the default model that Google's neural network was trained on, hence the results look very similar to the set of images Google released.

Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
Here's a Buddhabrot.

xirja
Idididoll Forcabbage

Posts: 1698
Filters: 8
Hell yeah man! Haven't tried messing with that stuff yet. Here are 4 of yours that might be hot with less contrast on the input:

https://www.filterforge.com/upload/for...lorful.jpg
https://www.filterforge.com/upload/for...test03.jpg
https://www.filterforge.com/upload/for...3%20PM.JPG
https://www.filterforge.com/upload/for...exture.jpg

Edit:

Indeed, it looks like low contrast and low brightness give the machine what it needs to dream!
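If it helps, here's roughly the kind of pre-processing I mean, knocking the contrast and brightness down with Pillow before feeding the image in. Just a sketch; the 0.7 factors are guesses, tweak to taste.

from PIL import Image, ImageEnhance

img = Image.open("input.jpg")
img = ImageEnhance.Contrast(img).enhance(0.7)    # factors below 1.0 lower contrast
img = ImageEnhance.Brightness(img).enhance(0.7)  # ...and darken the image
img.save("input_for_deepdream.jpg")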

_____________________________________________________

http://web.archive.org/web/2021062908...rjadesign/
_____________________________________________________
SpaceRay
SpaceRay

Posts: 12298
Filters: 35
Thanks very much for showing this, as I did not know anything about it, and it seems to be cool and interesting, although regrettably it seems to be only for programmers or someone who knows how to use it,

as this is not a normal executable file that you can install as-is; it comes from the GitHub deepdream repository.


As you are an expert in this, could you be so kind as to explain HOW this can be used, for other non-expert people?

How do you install it, use it, and feed in the images? How does it work?

These Google "Deep Dream" images are weirdly mesmerising

Wired magazine WEIRD results from Google deepdream

I wonder HOW these amazing images can be made using this new DeepDream?

Now You Can Turn Your Photos Into Computerized Nightmares With 'Deep Dream'

Deep Dream article

---------------------------------------------

Philip K. Dick wrote "Do Androids Dream of Electric Sheep?"

Well, now there is an answer ;) :D

Yes, androids do dream of electric sheep

and many more in this Google search
Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
Yeah I figured low-contrast images do better than high contrast ones. Here's a photograph of some fireworks I took. Looks like it found another universe beyond it.

For those of you having a lot of difficulty, here's an online tool that may take a while but requires no installation of libraries of any kind: http://psychic-vr-lab.com/deepdream/

Be warned, though, that your picture becomes "public".

Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
More!

Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
Test of high contrast images... it didn't work well... but the stuff inside the dots is pretty interesting.

Ghislaine
Ghislaine

Posts: 3142
Filters: 270
Yeah... the stuff inside the dots is very interesting. Love it, and also the animals in your image.
xirja
Idididoll Forcabbage

Posts: 1698
Filters: 8
Totally. :)

Not sure if this is true https://twitter.com/M_PF/status/616922809399410688, but it looks like it is.

Hopefully other images can be used to train the thing. For crying out loud! :D
Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
Well, the network is trained on a large data set you can grab online. If you can get hold of the variables Google is reading and calling on the image in the first iteration, you should be able to reverse-engineer the thing.

This is just a guess btw.
SpaceRay
SpaceRay

Posts: 12298
Filters: 35
I have seen these two links

How to install DeepDream with and without programming experience

How do I make my own deepdream images?

Google DeepDream on YouTube, with many videos available showing examples and also how to use it

Youtube Google DeepDream search

Quote
SKYBASE

Yeah I figured low-contrast images do better than high contrast ones


Good advice and a useful tip to know.

Awesome and amazing artworks you have made; you surely know how to use it and understand how it works, to be able to make such beautiful examples that are much better than other, simpler deepdream images.

I admire that you are really an expert in graphic design, can learn new tools and new software easily and quickly, and have creative ideas, as shown here.

I have not yet seen whether I will really be able to use it myself without programming knowledge. Maybe I will have to wait for someone to make a GUI or visual version with an easy standard installation; I think that if this gets more popular there will be a version with an interface for non-programmers.

Quote
Skybase
For those of you having a lot of difficulty, here's an online tool that may take a while but requires no installation of libraries of any kind


Is this the same thing as the real one with the same features, or is it a cut-down version? Also, it is not good that the images are made public.
xirja
Idididoll Forcabbage

Posts: 1698
Filters: 8
Keyword: The Thing :D



Now all we need are king crab upside down man heads.
Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
SpaceRay, the feature set doesn't change. What's different is that you can't change the variables around, which doesn't necessarily make the images better or worse. So basically, it's the same thing.

Quote
I have not yet seen whether I will really be able to use it myself without programming knowledge. Maybe I will have to wait for someone to make a GUI or visual version with an easy standard installation; I think that if this gets more popular there will be a version with an interface for non-programmers.


Probably happening. It's just that Google released the product as-is, with various dependencies that require fairly specific installation methods. But that's only right now; the code is open source, so keep your eyes out for it!
SpaceRay
SpaceRay

Posts: 12298
Filters: 35
Links explaining, in some possible way, how DeepDream works and what it does:

Artificial Neural Networks Can Day Dream–Here's What They See

and a more extended explanation is found here

Inceptionism: Going Deeper into Neural Networks

Dockerized deepdream: Generate ConvNet Art in the Cloud - Brain-dead simple instructions for programmers


I have just found what seems to be a new alternative deepdream release that appears to be simpler, although it is still only for programmers who understand it.

Quote
Dockerized deepdream: Generate ConvNet Art in the Cloud

github.com/VISIONAI/clouddream

Google recently released the deepdream software package for generating images like these, which uses the Caffe Deep Learning Library and a cool iPython notebook example.

Setting up Caffe, Python, and all of the required dependencies is not trivial if you haven't done it before! More importantly, a GPU isn't required if you're willing to wait a couple of seconds for the images to be generated.

Let's make it brain-dead simple to launch your very own deepdreaming server (in the cloud, on an Ubuntu machine, Mac via Docker, and maybe even Windows if you try out Kitematic by Docker)!

Motivation

I decided to create a self-contained Caffe+GoogLeNet+Deepdream Docker image which has everything you need to generate your own deepdream art. In order to make the Docker image very portable, it uses the CPU version of Caffe and comes bundled with the GoogLeNet model.

The compilation procedure was done on Docker Hub and for advanced users


Read the whole and complete text at the link.

It may be brain-dead simple for programmers, because I do not understand it and do not know what is being talked about. Or am I brain-dead? ;) :D

Also, it seems that it is all command-line based, and I personally do not like any software based on the command line without a graphical interface. Sorry, I am a visual person and do not feel comfortable with text-only software that is only for programmers and coders.

Really cool fractal style DeepDream video

Quote


Here's What Google's Trippy Deep Dream AI Does To A Video Selfie

Totally mind-bending.


Great and awesome fractal-style compositions in the video, really cool.


Quote
Skybase

SpaceRay, the feature set doesn't change. What's different is that you can't change the variables around, which doesn't necessarily make the images better or worse. So basically, it's the same thing.


Thanks. Maybe the features don't change, but if you can't change the variables, I suppose that it may make a big difference, or not?

Is it like, in comparison, having Filter Forge filters that render one image with one preset, where you can't change any of the values?

As I said, I think I will wait for an easier way made for non-programmers.

TEST with the online tool

It says "computer is now dreaming" and shows some sheep jumping :D :D :D

It may take maybe a WEEK to complete? :?: :?:

Maybe this is because there is a huge list of people uploading images and it takes time to process them all.

Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
Quote
It may take maybe a WEEK to complete?

I suspect the server is just flooded.

It's a very CPU-intensive process. It's basically iterative: it keeps running the image through the network as it renders. The larger the image, the longer it takes. Hence it's kinda unrealistic to do poster art with this (right now). It's just kinda a nerd thing.

Yeah, just wait for a GUI version. My bet is it'll take quite a while before that happens, but people are clearly working on it.

Quote
I suppose that it may make a big difference, or not?

Is it like, in comparison, having Filter Forge filters that render one image with one preset, where you can't change any of the values?


I guess it kinda depends on what you input. Basically you can call up parts of the code to do even crazier things, and you do have to write your own little functions if you want a bit more. But in general, changing a variable a tiny bit doesn't visually affect much; it mostly affects the process.

Overall, it's a bit non-programmer-friendly right now, but the general method via Docker is relatively easy if you're computer savvy and have experience working with that sort of thing. I'm no programmer, but I was able to pull this off, so I personally think it's not that bad.
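To give a rough idea of what those "variables" are, here's a stripped-down sketch of the loop the notebook runs. The parameter names roughly follow Google's deepdream notebook, but resize() and gradient_ascent_step() are stand-ins I made up, not the real helpers:

import numpy as np

# Sketch only: resize() and gradient_ascent_step() are hypothetical stand-ins.
def deepdream(net, base_img, iter_n=10, octave_n=4, octave_scale=1.4,
              layer='inception_4c/output'):
    # Build a pyramid of progressively smaller copies of the image ("octaves").
    octaves = [base_img]
    for _ in range(octave_n - 1):
        h, w = octaves[-1].shape[:2]
        octaves.append(resize(octaves[-1], (int(h / octave_scale), int(w / octave_scale))))

    detail = np.zeros_like(octaves[-1])
    for octave_img in reversed(octaves):                 # smallest octave first
        detail = resize(detail, octave_img.shape[:2])    # carry the dreamed detail upward
        img = octave_img + detail
        for _ in range(iter_n):
            img = gradient_ascent_step(net, img, layer)  # amplify whatever `layer` "sees"
        detail = img - octave_img                        # keep only the dreamed-in part
    return img

More iterations and more octaves mean more processing, which is why larger images blow up the render time so quickly.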
Rachel Duim
So Called Tortured Artist

Posts: 2498
Filters: 188
This is through a service that runs Deep Dream on the image for you. I can't run the app on my Mac; the graphics card is Intel. The size is reduced at deepdreamit.com.

Math meets art meets psychedelia.
SpaceRay
SpaceRay

Posts: 12298
Filters: 35
Quote
Skybase

It's a very CPU-intensive process. It's basically iterative: it keeps running the image through the network as it renders.

The larger the image, the longer it takes. Hence it's kinda unrealistic to do poster art with this (right now). It's just kinda a nerd thing.


Interesting to know. So if the time is related to the size, it is like what happens in Filter Forge; this is like some of the slow filters in Filter Forge that are unrealistic to use at higher resolutions, unless you want to wait many hours for the render.

Quote
Rachel Duim

The size is reduced at deepdreamit.com


I have tested this website, which offers a very simple and automatic upload, but it seems it is also flooded, OR they want you to pay $1.99 for each image to get it made faster :?:

When I tested it just now, there were 2,561 images ahead of mine in the free queue.

Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
Maan.... lol finally it has come down to this.

Not that it's wrong, just hope $1.99 goes into the right causes.

Well this bothers me, so send me 1 image you wanna see google deep dream on and I can process it. It's gotta be relatively small so don't expect print quality out of this.


[UPDATE] After a bit of tinkering I was able to produce larger images, although it takes significantly longer to finish. You can try sending me reasonably large images, but not too large. I doubt it'd work with ridiculously huge resolutions. It's pretty RAM-heavy as well.
SpaceRay
SpaceRay

Posts: 12298
Filters: 35
I wonder why some of the examples are really the same image with a weird psychedelic overlay, while others are really VERY different compositions from the original, with added figures and additional images. Maybe it is because this seems to use some kind of fractal process, and you have to configure it in some way to make cool artworks.

Quote
Not that it's wrong, just hope $1.99 goes into the right causes


Yes, well, I think it is wrong to charge for each image, and I would not pay it, but it seems it will go toward the server costs and maintenance of the system.

Quote
Well this bothers me, so send me 1 image you wanna see google deep dream on and I can process it. It's gotta be relatively small so don't expect print quality out of this.


Thanks for the offer, but it does not matter. I am not in any way desperate to use this and do not want to use it now, and I do not want to bother you, as you surely have better things to do.

Quote
After a bit of tinkering I was able to produce larger images, although it takes significantly longer to finish.


When you say "larger" images, what resolution or image size are you referring to?

I think that if this could work at 4000 x 4000 it would be enough.

And how long (minutes or hours) does it take to make it?
SpaceRay
SpaceRay

Posts: 12298
Filters: 35
DeepDream has reached Pinterest and Flickr (and many more places)

Pinterest: Google's Deep Dream algorithm and inceptionism

One of the many Flickr album pages

DeepDream images collection Flickr by Kyle McDonald

Also seems that there is another website that processes DeepDream images

https://dreamdeeply.com/
Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
Hehe, I think we're kinda overkilling it to the point where it's getting a bit boring.
Rachel Duim
So Called Tortured Artist

Posts: 2498
Filters: 188
I took the liberty of enlarging the following image so that I could look at it. It was flat as a pancake brightness- and color-wise, so I punched it up a bit with Vibrance. Poor Kiko, the scientific experiments, oh the agony :evil: (computed in a matter of weeks at deepdreamit.com)

SpaceRay
SpaceRay

Posts: 12298
Filters: 35
OH! Rachel Duim, this is the evolution of future cats that may have multiple eyes, and I wonder if they would move independently of the main two eyes :D
SpaceRay
SpaceRay

Posts: 12298
Filters: 35
It has already been a month since this appeared. Has there been any news about a new tool or new GUI, or is it still the same?

I mean that maybe someone has made something new that can be used in an easier way, and it may have appeared in some news.
Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
RealMac Software has created DeepDreamer, which gives you the Deep Dream stuff with options:
http://realmacsoftware.com/deepdreamer/

We are also starting to see Deep Style, which lets the network learn the style of artworks... so it can reapply them to other images like image filters. This is actually very, very cool, so check it out. You will love it.

http://www.qarl.com/qLab/?p=106

https://imgur.com/a/ujf0c

You can grab the source here:
https://github.com/jcjohnson/neural-style

Alternatively:
https://github.com/kaishengtai/neuralart

These are processor-heavy, so I would say it'll take a while before they reach the common market.
Rachel Duim
So Called Tortured Artist

Posts: 2498
Filters: 188
Here's a snapshot of DeepDreamer. I didn't know what I was doing; this is what came out. First, it looks an awful lot like reaction-diffusion. Second, it's not free, as you can see from the partially crippled screen (they want US$14.99). Pretty cheeky for a beta product; we used to call this crippleware.

Rachel Duim
So Called Tortured Artist

Posts: 2498
Filters: 188
Here is one more: I was able to get DeepDream to turn Kiko into a reptile with a fish coming out of his ear. Will I buy it... I can't decide if this is useful or just a curiosity.

Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
It honestly looks fine; it works perfectly for me. The first snapshot came out that way because of your settings: layer 3A 1x1, your iteration count, etc. DeepDream by default has various alternative settings accessible in the code, which you can load up to produce pretty intense images. Keep in mind that Deep Dream does seem to favor images with low contrast for more interesting results.
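For what it's worth, in the version I'm running the "setting" is mostly just which layer you aim the dream at and how many iterations you give it. Something like this, reusing the sketch from my earlier post (layer names are the GoogLeNet ones; treat the exact calls as illustrative, and load_image() is a hypothetical helper):

# Shallower layers tend to give fine textures and patterns; deeper layers give
# whole objects (eyes, dogs, pagodas). Sketch only, reusing deepdream() above.
img = load_image('fireworks.jpg')

subtle  = deepdream(net, img, iter_n=10, layer='inception_3a/1x1')     # roughly the "Layer 3A 1x1" setting
intense = deepdream(net, img, iter_n=30, layer='inception_4c/output')  # deeper layer, more iterations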

I think, under RealMac's philosophy for dev costs, charging $14.99 is technically fair, judging from the actual stability and speed of the product. However, it's a curiosity. I can run Deep Dream via Docker in a couple of steps, and it only takes a couple of copy-pastes of code before I get other results. It's a bit slower, but I didn't pay for my results.

If anything, I'd rather put money toward Deep Style, which has recently hit waves of interested people. That looks more entertaining to me than mashing your pictures up into a psychedelic jumble of dogs and weird caravans.

Ultimately I feel DeepDream is supposed to be an exploration into deep learning algorithms, and the image-making function we see today is really a byproduct. It's open source because it opens people up to ideas about a future where machine learning can possibly make lives easier and more interesting.
Rachel Duim
So Called Tortured Artist

Posts: 2498
Filters: 188
I agree with the open-source philosophy of it, but by its very nature no image created by it is truly "yours". So for the public philosophy, I'm all for it. The average person (whatever that is) needs to see that technology is just another tool, another brush, another shovel. It can do good things.

As you mention, it seems that "eyeballs" and dogs and other objects show up too often; the technology appears somewhat limited at this point. It might be how the data is sliced and the limitations of the pattern recognition going on. It is amazing that it works at all!

Given that it is open source, I think I'll wait to see if someone comes up with freeware for the Mac that doesn't require a specific GPU. For now, open source is not a solution for most Mac users without the required hardware.

I agree that $14.99 is fair, but they could have done a better job (say, watermarking) instead of disabling a third of the screen. And it does say Public Beta all over it; I would think a short trial with export disabled would make more sense. But that's my 2 cents, 3 with inflation.

I will look at Deep Styles next.
Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
Quote
As you mention, it seems that "eyeballs" and dogs and other objects show up too often; the technology appears somewhat limited at this point. It might be how the data is sliced and the limitations of the pattern recognition going on. It is amazing that it works at all!


Well, what you really do is change the Caffe model to something else; for example, there are a couple freely available online that recognize places or Flickr photographs. Again, the intention of those models is for the network to recognize what humans see daily, as well as places. You can swap out the default model for another relatively easily if you're working off the version I'm using, but the RealMac version probably doesn't let you do that due to copyright restrictions on some of the models (e.g. the Flickr-trained model). Also, the default model probably has an absurd number of dogs and other animals in its training set, so that's really a bias you're seeing.

Models are basically "trained" to recognize forms, so you can make one recognize anything. For example, you can use it for handwriting, everyday objects, or places, and you can of course get more specific with these. As an example, there's already a green-screen cutout method that uses deep learning specifically to produce accurate chroma keys.
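Swapping the model is basically just pointing the script at a different prototxt/caffemodel pair. Roughly like this with pycaffe; the places-model filenames are made up for the example, so use whatever the model you download actually ships with:

import numpy as np
import caffe

# The default setup loads GoogLeNet (hence all the dogs). To dream with a
# different model, load its deploy prototxt and trained weights instead --
# the filenames here are illustrative only.
net = caffe.Classifier('places_deploy.prototxt',
                       'places_googlenet.caffemodel',
                       mean=np.float32([104.0, 116.0, 122.7]),  # mean pixel; adjust per model
                       channel_swap=(2, 1, 0))                  # RGB -> BGR for Caffe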

Quote
Given that it is open source, I think I'll wait to see if someone comes up with freeware for the Mac that doesn't require a specific GPU. For now, open source is not a solution for most Mac users without the required hardware.


You also shouldn't need a GPU to run images through. A GPU is clearly faster, but the CPU is fast enough.
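And staying on the CPU is literally one line in the Caffe-based version (from memory, so double-check):

import caffe

caffe.set_mode_cpu()    # run everything on the CPU; no CUDA card needed, just slower
# caffe.set_mode_gpu()  # switch to this if you have a supported NVIDIA GPU set up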
Rachel Duim
So Called Tortured Artist

Posts: 2498
Filters: 188
Here is neural art grown at home! I managed to get it to work after installing (and sometimes reinstalling) so many packages I lost count. I got the source from
Neural-Style
and then went on a quest installing one dependency after another. This one is run through torch7 and Lua on Mac OSX 10.10.4 (LuaJIT, actually!).

I took two images around 3000 pixels wide, one for the style:


... and one for content:


I did a small image so I did not have to wait (512 pixels wide, took 15 minutes for 200 iterations) and punched it up with levels in Photoshop. Here it is:
Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
Very nice! :) If you have time, can you write up a rough installation guide? Otherwise don't worry, I'm probably just going to try it myself anyway.

Here's another interesting one: This one includes animation features.
https://github.com/mbartoli/neural-animation
Rachel Duim
So Called Tortured Artist

Posts: 2498
Filters: 188
Rough installation guide... well, I'll tell you off the top of my head what you will need for Mac OSX 10.10:

Xcode 6 or higher (if you have to install this, it's over 2 GB; go to lunch)
Python and IPython (I installed Anaconda to get these)
Lua (with LuaJIT)
torch7
loadcaffe (its install instructions are wrong for the Mac: sudo apt-get install libprotobuf-dev protobuf-compiler does not work. Get Homebrew from brew.sh, then run brew install protobuf)

There are smaller steps for libraries etc.; I would have written this down if I had known it was going to be 10 steps or more. Let me know if you run into an issue somewhere and I will try to recall how I did it.

WARNING: This is alpha software with limited error checking; it crashes easily and runs quite slowly if the output is over 1000 pixels wide. It is a CPU hog, uses up main memory quite easily (even on a 16 GB system), and starts swapping memory in and out, slowing considerably when that happens. It is all command line, no GUI.

Here is the next attempt using the above images, now 1024x768, took 6 hours:

SpaceRay
SpaceRay

Posts: 12298
Filters: 35
Thanks very much, Skybase, for the links you have posted. Good to know that there is an alternative; even though it is not free and costs $15, it is worth it to have a GUI that works right, so it may be worth buying, and it is not a fortune to spend.

Thanks also very much to Rachel Duim for the examples and comments.
Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
Thanks Rachel for taking the time on that!

Fortunately I had some of those already installed, but it sounds like a bad idea right now, especially when I'm doing pretty important work on this laptop. Heh. Oh well... I played around with deepdream a lot after installing all those dependencies. Somebody later came up with a Docker install version which made it exponentially easier to install the whole thing, so hopefully that sort of thing happens here soon enough.

The real reason I was trying to install Deep Style was to see if it's capable of replicating my artistic style. For example, the images below:

http://skybase.deviantart.com/art/The...-536485144
http://skybase.deviantart.com/art/A-View-365852847
http://skybase.deviantart.com/art/Sno...-365853586

Like those sloppy-square designy things I sometimes do which make stuff really abstract. But I feel there's potential here to make some kind of art piece that auto-generates a picture in my style (theoretically), which would let somebody technically live forever as long as the machine isn't destroyed. It's just a concept-art thingy I had in mind.
Rachel Duim
So Called Tortured Artist

Posts: 2498
Filters: 188
Neural-Style works pretty well for 512-pixel-wide output, but it is quite slow for anything larger than that (as mentioned above). I'm doing a 1280-pixel-wide output image for 300 iterations now. It's up to 250; I'll post the results. Estimated time to complete: 22 hours.

This program needs a UNIX box with terabytes of memory for larger image output. It's amazing the Mac can handle the swapping in and out of memory. The algorithm is quite impressive given the research nature of it (MIT license). JC Johnson is actively updating the project's Lua files, so it's a work in progress.
Rachel Duim
So Called Tortured Artist

Posts: 2498
Filters: 188
Don't try this at home (unless you have 22 hours of dedicated CPU on your hands). Here is 300 iterations at 1280x960.

Nice "square" artworks, BTW. I think Neural-Style would work well with them as style content, especially those with reduced colors.

SpaceRay
SpaceRay

Posts: 12298
Filters: 35
It seems that this is growing and becoming more popular.
Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
I thought I'd sit down and figured I might as well install neural-style. So after dealing with broken files everywhere, thanks to myself not taking care of anything... I got it to run! Oh boy, super excited, I decided to throw in some sample images... oh boy, this is going to take a while.

But amazing indeed. My MacBook Pro, despite being what it is, surprisingly performs regardless.

Skybase
2D/3D Generalist

Posts: 4025
Filters: 76
OK, so it works. Not the best quality, but it works. It's a mixture of my cat and a digital painting I did a couple of years ago.

Although at this rate it's clearly faster to run this on a GPU. I think it's time for Amazon EC2. I still have a lot to read up on, but as far as my understanding goes, the benefits are clearly there.

The thing is, I've been following a community of deep-dream enthusiasts and they've already set up an AMI to use. Otherwise I'm considering tinkering with Amazon EC2 to make my own mini render hub. A lot of my work is starting to get processor-heavy, and it's just a pain in the ass waiting for the darn computer to compute a couple of particle simulations.
