Messages 91 - 125 of 125
Vladimir Golovin
Administrator
SpaceRay, yes, we're busy with FF10 (and with the site redesign, as the current site is embarrassing in 2020), but The Next Big Thingy is being actively developed.
The current state is as follows:
- We're going with Vulkan compute shaders (as opposed to CUDA).
- GPUs are still weird and hard to program.
- Current goal: a tile-based architecture (for images larger than GPU memory).
- Most recent success: a Gaussian blur running on the tile-based engine.
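The tile-based idea can be sketched in plain Python (all names here are invented and purely illustrative -- the actual engine uses Vulkan compute shaders, not Python): each tile is padded with a "halo" of real neighboring pixels as wide as the filter radius, so filtering tile by tile matches a whole-image pass while only one padded tile ever needs to be resident at a time. A naive box blur stands in for the Gaussian blur.

```python
# Illustrative sketch of tile-based filtering (invented names, not actual
# Filter Forge code). A naive box blur stands in for the Gaussian blur.

def box_blur(img, radius):
    """Naive 2D box blur on a list-of-lists image, clamping at the edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = n = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
                    n += 1
            out[y][x] = acc / n
    return out

def blur_tiled(img, radius, tile=64):
    """Blur tile by tile; each tile is padded with a `radius`-wide halo of
    real neighboring pixels, so the result matches a whole-image blur."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            y0, y1 = max(ty - radius, 0), min(ty + tile + radius, h)
            x0, x1 = max(tx - radius, 0), min(tx + tile + radius, w)
            patch = [row[x0:x1] for row in img[y0:y1]]
            blurred = box_blur(patch, radius)  # only the padded tile is "resident"
            for y in range(ty, min(ty + tile, h)):
                for x in range(tx, min(tx + tile, w)):
                    out[y][x] = blurred[y - y0][x - x0]
    return out
```

With the halo exactly as wide as the kernel radius, `blur_tiled` is pixel-for-pixel identical to a whole-image `box_blur`; any neighborhood-based filter needs this kind of halo exchange to be tiled correctly.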
Posted: October 28, 2020 8:53 am
Grimbly
Posts: 68
Site redesign? For FF? I like the FF website, though. It performs and loads well, looks nice, and doesn't at all resemble the Metro-style Windows 10 UI everyone is spamming all over these days: big squares and rectangles of flat color, no rounded corners or gradients, far too oversized for the elements they contain. Disgusting. Please don't go that route. Think more like something an Adobe AIR app would have had instead.
Posted: October 29, 2020 1:56 am
Sphinx.
*looks for the 'like' button... in vain*
Posted: October 29, 2020 8:24 am
FFCreator
Posts: 66
Will "The Next Big Thingy" work with the existing user-submitted Filter Forge filters? In other words, will it be able to use the 13,000+ existing Filter Forge filters from day one?
Posted: October 30, 2020 7:48 am
FFCreator
Posts: 66
Have you tried partnering with other companies that have a lot of experience in GPU coding? E.g. BluffTitler (Outerspace Software). Do you want shared rendering code for Windows and macOS?
Posted: October 30, 2020 8:08 am
Vladimir Golovin
Administrator
No, we haven't. Our demands are very, very specific (a tile-based renderer for huge bitmaps with floating-point precision, written in Vulkan), and it's extremely unlikely we'd find that expertise elsewhere. Speaking of BluffTitler specifically: based on their example videos, it looks like they're doing regular rasterization with native polygons and shaders, which won't help us much.
Ideally, yes. But as of today, Macs don't support Vulkan. At the moment we're focusing on getting the renderer up and running, so we're deliberately disregarding portability issues -- which means that our initial versions will run on Vulkan only, not on Metal.
Posted: November 1, 2020 1:58 am
SpaceRay
I agree that Vulkan is better than CUDA, as it will work on all graphics cards rather than only Nvidia's: the initial specifications stated that Vulkan will run on any hardware that supports OpenGL ES 3.1 or OpenGL 4.x and up. It gives great performance and is the best choice, so you've made the right call.
It is true that macOS does not support Vulkan directly and natively, but I have found a workaround to use Vulkan on macOS, and it works correctly.
Sorry, I don't know anything about programming for GPUs, but reading the part "without having to rewrite your app that already uses Vulkan" above, it seems this may help you use what you have already done for Windows on macOS directly, without having to rewrite the code:
- Want to run Vulkan on iOS and macOS? Use MoltenVK
- Benefits of the Vulkan macOS SDK
- Developing Vulkan Applications for macOS, iOS, and tvOS (scroll down in the text to see the instructions)
- Vulkan Development on MacOS Setup Guide (a 24-minute video tutorial for developers)

And some reviews:
- Khronos Brings Vulkan to macOS, iOS, After Apple Refuses to
- Vulkan is coming to macOS and iOS, but no thanks to Apple
Posted: November 1, 2020 4:12 am
Vladimir Golovin
Administrator
We know about MoltenVK, but we have very strong doubts that it will work in our case right out of the box; we currently assume it won't. Even vanilla Vulkan is difficult to get working on officially supported hardware, and I can only imagine the levels of pain from adding another layer into the mix.
Posted: November 1, 2020 3:59 pm
David Roberson
This comment is kind of a throwback to 2018, but for some reason I lost track of this thread at about that point. In reference to the Slope Blur/Iterated Distortion examples, I had a question to interject:
To either kirkl13 or Vlad: can the "Min" blending option be achieved with the RGB Math Min node? I've never been entirely clear on the difference between the two Min nodes.
Posted: May 12, 2021 3:17 pm
traycaa
Posts: 13
Hi Vladimir,
Nice to see that the team is striving to take Filter Forge to the next level. I have some concerns regarding GPU acceleration for this software.

I am of the opinion that part of the charm of this software is that it can be used without requiring an expensive GPU. The fact that it runs entirely in software is also what makes it very usable, imo, since it can be run anywhere. I also believe that having to support multiple graphics cards and drivers can be very daunting and may cause a lot of bugs, crashes, and breakage that can eat a lot of development time and resources, which in effect leaves less time for implementing and improving features.

Another problem that GPUs pose is limited VRAM, which can cause great issues when rendering out high-resolution images. When it comes to non-realtime rendering, I believe using system memory has always been the better solution, as large amounts of system memory are cheaper to afford.

I believe there are still software-based ways to improve the speed and features of the app. Have you looked into Intel Embree? Maybe it can provide the performance optimizations necessary for sampling. https://github.com/embree/embree

In the end I believe it is a decision between speed and accuracy. I favour accuracy, which is unfortunately the less popular side to take. Best of luck with the development of this program!
Posted: May 21, 2022 2:03 pm
Vladimir Golovin
Administrator
Hi traycaa -- sorry it took me so long to respond.
>> without requiring an expensive GPU

Technically, we already have a code generator that can emit both GPU and CPU code. For example, we're considering using the GPU for the main output renderer and the CPU for rendering thumbnails for the node tree in the GUI (because using the GPU for those is a bit complicated). Theoretically we may use the CPU for the main output too. However, there may be divergences in output between GPU and CPU, because these are completely different execution paradigms. I guess we'll try it with thumbnails first and see how it goes.

>> support multiple graphics cards and drivers can be very daunting

Yes. That's exactly why we're using Vulkan. However, the biggest problem with Vulkan is not the multiple graphics cards -- it's the Mac. There's no official implementation of Vulkan for Mac. Yes, there is the unofficial MoltenVK, but we have no idea whether it works in our case or not. We need to get the thing off the ground first; then we'll look into portability.

>> Another problem that GPUs pose is the limited vram memory

We're building a tile-based renderer with a virtual memory system. This is one of the reasons the entire thing is taking so long -- the memory system alone took us at least a full year to develop.
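One concrete reason CPU and GPU output can diverge is floating-point accumulation order: a sequential CPU loop and a GPU-style parallel (tree) reduction round at different intermediate steps. A small illustrative sketch (not Filter Forge code):

```python
import math
import random

# Illustrative sketch: the same floating-point sum, accumulated sequentially
# (CPU-style) vs. in a tree-shaped pairwise reduction (GPU-style), can land
# on slightly different results -- neither is "wrong", they round differently.

random.seed(42)
values = [random.uniform(-1e6, 1e6) for _ in range(10_000)]

def sequential_sum(xs):
    """CPU-style left-to-right accumulation."""
    acc = 0.0
    for x in xs:
        acc += x
    return acc

def tree_sum(xs):
    """GPU-style pairwise (tree) reduction."""
    xs = list(xs)
    while len(xs) > 1:
        xs = [xs[i] + xs[i + 1] if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

# math.fsum gives the correctly rounded reference sum for comparison:
exact = math.fsum(values)
print(sequential_sum(values) - exact, tree_sum(values) - exact)
```

Both results sit within a tiny distance of the exact sum, but their low-order bits generally differ between the two accumulation orders -- which is exactly the kind of CPU/GPU divergence described above.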
Posted: June 1, 2022 4:44 am
traycaa
Posts: 13
Hi Vladimir,
Thank you for responding! It's good to know that these issues have been thought through and are being addressed. It gives me faith that the program is heading in a smart direction. I am excited to see this tile-based rendering system being implemented, and a Turing-complete node system is exciting as well. Best of luck, and I believe in you and your team!
Posted: June 4, 2022 2:38 am
Vladimir Golovin
Administrator
>> Best of luck and I believe in you and your team!
Thank you!

We're now pushing for a "playable prototype". We're currently integrating the node-based GUI and the execution engine (both the CPU and GPU parts) into a single app that you can actually run, create and save files with, and produce and export output from. Turing-completeness is already done (we were able to implement Poker in it, purely with nodes), and the main remaining fear is how interactive the execution engine will turn out to be. In Filter Forge, the preview and the thumbnails on nodes update immediately after you change something. In the sequel, we have to:

1. Re-generate the CPU code.
2. Recompile the shaders.
3. Kill the current execution process.
4. Spawn a new execution process.
5. Feed the generated code to it.
6. Order the process to run the code.
7. Get the results back via inter-process communication.

This may turn out to be less interactive than I hoped (though we already have some optimizations in mind).
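The kill/spawn/feed/run/fetch part of that cycle can be sketched with a toy stand-in (hypothetical names; a Python child process and pipes standing in for the real execution process and IPC):

```python
import json
import subprocess
import sys

# Hypothetical stand-in for the cycle described above: spawn a fresh worker
# process, feed it "generated code", have it run the code, and fetch the
# result back over pipes (a stand-in for real inter-process communication).

WORKER_SOURCE = (
    "import sys, json\n"
    "ns = {}\n"
    "exec(sys.stdin.read(), ns)\n"       # run the code fed to the worker
    "print(json.dumps(ns['result']))\n"  # ship the result back
)

def run_on_worker(generated_code):
    """Spawn a worker, feed it code on stdin, read the JSON result back."""
    proc = subprocess.run(
        [sys.executable, "-c", WORKER_SOURCE],
        input=generated_code, capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

# "Generated code" standing in for freshly regenerated node/shader output:
print(run_on_worker("result = sum(i * i for i in range(10))"))  # → 285
```

The interactivity concern is visible even here: every edit pays the full spawn-plus-pipe round trip, which is why keeping a warm worker process alive is a common optimization for this pattern.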
Posted: June 6, 2022 10:31 am
traycaa
Posts: 13
Awesome, I will be one of the first to join the testing for this prototype.
Sounds like a really complex set of steps to get the view updated -- hopefully it can be optimized!
Posted: June 9, 2022 2:03 am
traycaa
Posts: 13
Hey Vladimir, I just wanted to know whether this tile-based renderer will provide the benefit of resolution independence that the current sample-based architecture has. The advantage of resolution independence is that the final output produces the sharpest, crispest image; this is what allows Filter Forge to produce very high-fidelity textures, imo. I hope the new tile-based system will keep the same benefits. I'm assuming that by a tile-based renderer you mean it will sample tile by tile rather than shooting all the samples at once, so I guess it will still use a sample-based system, which allows for resolution independence.
Posted: June 15, 2022 7:49 am
Vladimir Golovin
Administrator
Yes. The current approach is as follows: you define a fill (technically a shader), and you can execute it on any bitmap of any resolution. You can even use the same fill / shader to fill multiple bitmaps at once.
Posted: June 16, 2022 4:44 am
Vladimir Golovin
Administrator
We currently have a single GPU architecture implemented -- you define a fill (shader) and execute it on as many bitmaps of any resolution as you want. Inside the shader it's "kinda" sample-based, so yes, it will render in higher detail on larger bitmaps. (Having said that, I wouldn't rule out adding a proper sample-based architecture, but it may require a raytracing-capable GPU.)
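The fill concept can be sketched like this (illustrative Python with invented names, not the actual shader code): a fill is a pure function of normalized (u, v) coordinates, so the same fill can be executed on a bitmap of any resolution, and a larger bitmap simply samples it more densely -- which is where the resolution independence comes from.

```python
import math

# Illustrative sketch (invented names): a "fill" is a pure function of
# normalized (u, v) coordinates in [0, 1), so the same fill renders on any
# bitmap resolution -- a larger bitmap just samples it more densely.

def radial_fill(u, v):
    """Example fill: radial gradient centered at (0.5, 0.5), clamped to 1.0."""
    return min(1.0, 2.0 * math.hypot(u - 0.5, v - 0.5))

def render(fill, width, height):
    """Execute a fill on a bitmap of the given resolution (pixel centers)."""
    return [[fill((x + 0.5) / width, (y + 0.5) / height)
             for x in range(width)]
            for y in range(height)]

thumb = render(radial_fill, 8, 8)      # same fill, two resolutions
full = render(radial_fill, 256, 256)
```

Because `radial_fill` is defined over continuous coordinates rather than pixels, the 256x256 render genuinely resolves more detail instead of upscaling the 8x8 one.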
Posted: June 16, 2022 4:47 am
traycaa
Posts: 13
Awesome!
Thank you for the response. Can't wait for the product! I wouldn't mind a proper sample-based architecture, since I have an Nvidia GPU with OptiX support -- it would be nice to have this option as well for people with such GPUs.
Posted: June 17, 2022 1:26 am
SpaceRay
Sorry, I don't really care HOW it is done; the really important and essential thing is that the new software has GPU acceleration and can render results much faster, the same way other software accelerates to get fast results. If that turns out to be true and real, it will surely win a lot of users and sell much better.
Posted: June 20, 2022 5:29 am
traycaa
Posts: 13
I love it, what they're trying to do is crazy and exciting!
If they manage to pull this off, this could end up being an extremely powerful graphics program of groundbreaking magnitude. They are currently implementing a system that:
- Runs on a Vulkan-based GPU architecture.
- Uses a tile-based renderer with a complex virtual memory system to support images larger than GPU memory.
- Has a custom Turing-complete node-based language residing on top of the GPU that allows for powerful expression. EXCITING STUFF!
- Offers resolution independence for the sharpest-quality image.
- Supports vector graphics.

Vector graphics support is especially ridiculous, because there isn't a single renderer out there that allows the creation of vector graphics alongside pixels while maintaining resolution independence. Most sample-based 3D renderers can render out these vectors, but they can't create them on their own. 2D software may have vector and raster support, but the raster support isn't resolution independent. So this feature is especially special, since no one else has done it yet.

This system is the definition of a high-fidelity rendering and graphics application.
Posted: June 22, 2022 4:45 am
Vladimir Golovin
Administrator
On top of both the GPU and the CPU. Some functions (i.e. groups of nodes) execute on the GPU, some (including all top-level nodes) on the CPU. However, the repertoire of nodes that can be placed inside GPU functions is limited, because GPUs can't execute arbitrary code. For example, our GPU functions can't contain lists, tuples, map / filter / fold nodes, arbitrary bitmap creation and modification nodes, Bezier spline creation and modification nodes, etc. -- while CPU functions can. To implement this, we introduced two concepts: "domains" (currently there are only two -- CPU and GPU) and "citizenship". Some nodes can be citizens of only one domain, and some, like the polymorphic Add / Subtract / Multiply nodes, are citizens of all domains.
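A minimal sketch of the domain/citizenship idea (node names and API invented for illustration, not the actual implementation):

```python
# Invented illustration of "domains" and "citizenship": each node kind
# declares which domains it can execute in, and a function that belongs to
# a domain may only contain citizen nodes of that domain.

CPU, GPU = "cpu", "gpu"

NODE_CITIZENSHIP = {
    "Add":        {CPU, GPU},  # polymorphic math nodes live in all domains
    "Subtract":   {CPU, GPU},
    "Multiply":   {CPU, GPU},
    "MapList":    {CPU},       # list / map / fold machinery is CPU-only
    "FoldList":   {CPU},
    "MakeSpline": {CPU},       # Bezier spline nodes are CPU-only
}

def illegal_nodes(domain, node_names):
    """Nodes that may not appear in a function of the given domain."""
    return [n for n in node_names if domain not in NODE_CITIZENSHIP[n]]
```

With a table like this, validating a GPU function is a simple lookup: `illegal_nodes(GPU, [...])` flags any CPU-only nodes, while the polymorphic math nodes pass in both domains.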
Posted: June 22, 2022 7:06 am
Vladimir Golovin
Administrator
The current plan is a separate type for Bezier splines, plus some basic nodes for creation (box, circle, ellipse, arc, spline-from-points, etc.) and modification (e.g. Booleans, outlines, and affine transforms). You can export the generated splines as-is, and you should be able to rasterize them into bitmaps of arbitrary resolution (hopefully with a GPU-based fill). Spline nodes execute on the CPU only, but that's a good thing, because it automatically guarantees that splines will work with all our "programming" nodes -- lists, tuples, maps, filters, folds, etc. For example, you can merge multiple splines into a single Boolean by putting them all into a list, then folding the list into a single spline with the Boolean operation.
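The list-then-fold merge can be illustrated with a toy stand-in (sets of grid cells instead of real Bezier splines, so the Boolean union becomes a set union; the real nodes operate on spline geometry):

```python
from functools import reduce

# Toy stand-in for the "put splines in a list, then fold with Boolean" idea:
# a "shape" here is just the set of grid cells it covers.

def box(x0, y0, x1, y1):
    """Stand-in for a box creation node: cells covered by the rectangle."""
    return {(x, y) for x in range(x0, x1) for y in range(y0, y1)}

def boolean_union(a, b):
    """Stand-in for the Boolean (union) modification node."""
    return a | b

shapes = [box(0, 0, 2, 2), box(1, 1, 3, 3), box(4, 0, 5, 1)]  # the list
merged = reduce(boolean_union, shapes)                        # the fold
```

`reduce` plays the role of the fold node: any number of shapes in the list collapses into a single merged shape, which is exactly why CPU-only splines composing with the generic list/fold nodes is so convenient.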
Posted: June 22, 2022 7:10 am
Vladimir Golovin
Administrator
Posted: June 22, 2022 7:14 am
traycaa
Posts: 13
AWESOME!
Thank you for all this development info! Those types also look very simple to use, like a much more conveniently designed shading language. It kind of reminds me of VEX in Houdini. Which reminds me: it would be really nice to have linear algebra data types as well, like Matrix and Vector, with operations such as determinant and cross product.
Posted: June 22, 2022 11:55 am
Vladimir Golovin
Administrator
Because they are. That's the entire point. If you have a node that outputs Bitmap, you can plug it into any Bitmap-typed inputs -- and it's the same with any other type.
Yes and no. Some of our types are present in shading languages. However, lists, tuples, strings, characters, etc. aren't present in GPU languages. Our type system (and many of the types) is more similar to Haskell's.
I don't see any reason why these couldn't be included -- our type system supports any number of types. However, we'd have to decide on the exact nodes and traits / typeclasses for these types. And of course the preview. And the color (if we don't want to dump them into Miscellanea). And the citizenship (GPU, CPU, or both).
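As a sketch of what plugging in such a type might involve, following the decisions listed above (traits/typeclasses, color/category, citizenship) -- all names invented, not the actual system:

```python
# Invented sketch: registering a new type means declaring its traits
# (typeclass-style capabilities), its node-editor color/category, and its
# domain citizenship -- the decisions listed in the post above.

TYPE_REGISTRY = {}

def register_type(name, traits, citizenship, color="Miscellanea"):
    TYPE_REGISTRY[name] = {
        "traits": set(traits),
        "citizenship": set(citizenship),
        "color": color,
    }

register_type("Float", {"Add", "Multiply", "Show"}, {"cpu", "gpu"}, color="Math")
register_type("Matrix", {"Add", "Multiply", "Determinant", "Show"},
              {"cpu", "gpu"}, color="Math")

def can_connect(output_type, input_type):
    """Same-type outputs plug into same-type inputs, as described above."""
    return output_type == input_type

def supports(type_name, trait):
    return trait in TYPE_REGISTRY[type_name]["traits"]
```

The point of the registry shape is that adding Matrix or Vector is data, not new machinery: the connection rule and trait lookups stay the same for every type.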
Posted: June 22, 2022 1:28 pm
David Roberson
Wow, it's already been four years! This thread came back up in a search for something else, but I remember when you made that statement, and I was wondering where things stand with that plan. Is FFF still an up-and-coming thing, or has it slipped off into the weeds and gotten lost behind more immediate FF development?
Posted: August 8, 2022 1:55 am
Vladimir Golovin
Administrator
FFF is absolutely an up-and-coming thing; it is in active development. We're already done with the type system (the thingy that decides what can be connected to what), and we're mostly done with CPU-based execution. At the moment we're working on the node editor (it's written in Vulkan) and on the integration of GPU and CPU nodes so they can work together.

Our current goal is basically the same as it was four years ago -- to make a playable prototype that utilizes both CPU and GPU, can do something practical, and shows a non-trivial speedup on the GPU. However, the entire undertaking turned out to be so enormous that I have no idea when we'll actually ship that playable prototype. There is still a lot of uncertainty everywhere, from the lower levels to the higher-level / applied parts. And there are major unsolved problems in the usability department -- for example, what to show in thumbnails for nodes that are part of a function definition. But, as they say, one foot in front of the other...
Posted: August 10, 2022 4:17 am
David Roberson
That is excellent news. I understand how it is when your goal gets pushed back by an unexpected amount of dev work. I have all but given up on my original plan for my art career, due to endless, frustrating setbacks. I lost most of what I'd built up, project-resource-wise, because of repeated hardware failures. I'm now on my fourth computer, and have not managed to recover anything significant. I am glad you're sticking to your plan, and I look forward to its completion. Hang in there, Vlad!
Posted: August 10, 2022 8:40 pm
Vladimir Golovin
Administrator
Yes, there were (and are) plenty of pushbacks. The project files and animation work we did for FF took years. And the task itself offers tons of novel problems where you can't just borrow an existing solution from elsewhere (thumbnails in function definitions being a good example). And of course there is always a lingering fear: what if the whole idea is one big dud?
Yep, sometimes it feels like you're playing a roguelike and having a bad run. Though, as of late, I even find a strange game-like satisfaction in dodging and dispatching all the crap life throws at you.
Posted: August 11, 2022 4:36 am
Vladimir Golovin
Administrator
"It's better to be in the arena, getting stomped by the bull, than to be up in the stands or out in the parking lot." -- Steven Pressfield, The War of Art. BTW, have you read this book, The War of Art? The best 10 bucks I've ever spent in my entire life. I'm on my third re-reading now. There are parts in it that deal with the "all is lost" moment -- which every artist is bound to have sometime during their career.
You mean the files that were stored locally on your computers?
Posted: August 11, 2022 5:04 am
David Roberson
Yep. After I'd gone to all the trouble of replicating my own work on the cloud and on external drives, I discovered that I was over budget and cancelled the cloud backup, thinking I was covered by the external HDs. Then, after the first computer died, I found out that my backups were mostly file-locked without it. I kept the drives, hoping I'd one day get them open again, but when I deleted my "redundant" copies to free up storage, I found out that the backup I'd kept was completely out of date, and my most recent file versions were history. Poor management on my part, but content management was always a side job done while working on something else; the distraction combined with my own absent-mindedness to produce stupid mistakes. Live and learn, I guess. I still hope I can afford to send the drives from the dead systems in for data recovery.

The latest problem I ran into was Daz switching to a new CMS that insisted on installing everything to my SSD, which has no room for it. I am working on a way around it, but lack motivation, since I'd only be restoring owned content I was able to re-download. Without my own assets to work with, I'm starting over from scratch, and I just haven't been able to face it.
Posted: August 11, 2022 12:56 pm
acire1
Posts: 55
Hi David, sorry to barge in on this conversation, which I just happened upon. I know how frustrating stupid but simple mistakes can be, and how they create a negative mindset. Don't give up on your plans; just save up to send those drives from your dead systems for data recovery. That way you don't need to start from scratch, but can just carry on where you left off... with a positive mindset.
Posted: August 11, 2022 3:10 pm
David Roberson
No worries, acire1. I appreciate the support. Thanks!
Posted: August 13, 2022 4:03 pm
Vladimir Golovin
Administrator
I'd recommend Backblaze for backup -- $6 per month, and it backs up everything. Though I'd advise logging in every month or so and checking the most recent backup date, to make sure the backups are actually being performed.
Posted: August 16, 2022 9:25 am
David Roberson
Thanks, Vlad!
Posted: August 20, 2022 3:52 am |