Interesting! I’ll try this tonight and see how it goes. Really appreciate your reply though. I’ll let you know the outcome.
Software Engineer (iOS - ForeFlight) 🖥📱, student pilot ✈️, HUGE Colorado Avalanche fan 🥅, entrepreneur (rrainn, Inc.) ⭐️ https://charlie.fish
Got it. Thanks for the reply! So is Keras just a dependency used in TensorFlow?
From what I’ve seen TensorFlow is still more popular. But that might be starting to change. Maybe we need to make a PyTorch community as well 🤔
I wish it worked on more webpages. But totally agree.
Thank you so much for checking it out! I really appreciate the feedback. I am considering a few ideas to revamp the subscription. No guarantees yet, but stay tuned to this community for updates.
What? I’m not following. Steam isn’t federating with anyone. This is about having a link to an external site. Nothing more. Has nothing to do with federation directly.
That is so bad. They clearly don’t understand the appeal of decentralized systems…
Submitted!
Just added to my todo list. Hopefully I’ll get around to this today! Thanks!!
Thanks so much. Always open to feature requests if you have any 😉. Stay tuned to this community for updates.
Thanks so much!
Thanks so much! 🎉 Native is the only way to go imo 😝. I’ll make a PR this weekend.
Might also have some comments on the Lemmy API eventually, but I’ll save those for a later date haha.
Thanks for all you do for Lemmy.
Thank you for taking the time to provide such valuable and constructive feedback. I truly appreciate it. Some of these ideas are actually already on the roadmap, which is why I’m considering making some of those creation features free.
Stay tuned to this community for updates!
Check back in 24 hours or so. If it still isn’t available, please let me know.
This is probably going to change in the near future. A lot more functionality is being added to Echo+, so a lot of this stuff will probably change soon. However, a lot of users choose to only read content from platforms such as Lemmy. So the free base app focuses on what the maximum number of users actually do on these platforms.
I truly appreciate your feedback though. It’s something I really do take to heart, and will continue to think about and assess. Any updates will be posted at [email protected].
Check back in 24 hours (or maybe less 😉)
What is your region?
Just posted there. Thanks!
This worked!!! However it now looks like I have to pass in 32 (batch size) comments in order to run a prediction in Core ML? Kinda strange when I could pass a single string to TensorFlow to run a prediction on.
Also it seems to be much slower than the Create ML model I was playing with. Went from 0.05 ms on average for the Create ML model to 0.47 ms on average for this TensorFlow model. It also looks like this TensorFlow model is running 100% on the CPU (not taking advantage of the GPU or Neural Engine).
Obviously there are some major advantages to using TensorFlow (i.e. I can run it in a server environment, I can better control stopping training early based on that `val_accuracy` metric, etc). But Create ML seems to really win in other areas, like being able to pass in a simple string (not having to worry about tokenization), not having to pass 32 strings into a single prediction, and the performance.

Maybe I should lower my batch_size? I’ve heard there are pros and cons to lowering & increasing batch_size. Haven’t played around with it too much yet.
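As a stopgap for the fixed-batch issue, one workaround (a minimal sketch, not Core ML-specific — `BATCH_SIZE`, `SEQ_LEN`, and the stand-in `model_predict` callable are all assumptions here, since I don’t know your model’s actual shapes) is to tile the single tokenized comment across the batch dimension and keep only the first row of the output:

```python
import numpy as np

BATCH_SIZE = 32  # assumption: the converted model's fixed batch dimension
SEQ_LEN = 64     # assumption: the model's expected token-sequence length

def predict_single(model_predict, tokens):
    """Run a fixed-batch model on one tokenized comment.

    `model_predict` is any callable expecting a (BATCH_SIZE, SEQ_LEN)
    array. We pad the tokens to SEQ_LEN, tile the single sample across
    all 32 batch rows, and keep only row 0 of the result.
    """
    tokens = np.asarray(tokens, dtype=np.float32)
    padded = np.pad(tokens, (0, SEQ_LEN - len(tokens)))  # zero-pad to SEQ_LEN
    batch = np.tile(padded, (BATCH_SIZE, 1))             # shape (32, SEQ_LEN)
    return model_predict(batch)[0]                       # first row only

# Usage with a stand-in "model" (row-wise mean as a dummy prediction):
dummy = lambda x: x.mean(axis=1)
score = predict_single(dummy, [1.0, 2.0, 3.0])
```

It wastes 31/32 of the compute per call, so it’s only a workaround — re-converting the model with a batch dimension of 1 (if the conversion tooling allows it) would be the cleaner fix.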
Am I just missing something in this analysis?
I really appreciate your help and advice!