Software Engineer (iOS - ForeFlight) 🖥📱, student pilot ✈️, HUGE Colorado Avalanche fan 🥅, entrepreneur (rrainn, Inc.) ⭐️ https://charlie.fish

  • 31 Posts
  • 31 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • This worked!!! However, it now looks like I have to pass in 32 comments (the batch size) to run a prediction in Core ML? Kinda strange, since I could pass a single string to TensorFlow to run a prediction.

    Also, it seems to be much slower than the Create ML model I was playing with. Went from 0.05 ms on average for the Create ML model to 0.47 ms on average for this TensorFlow model. It also looks like this TensorFlow model is running 100% on the CPU (not taking advantage of the GPU or Neural Engine).

    Obviously there are some major advantages to using TensorFlow (e.g. I can run it in a server environment, and I can better control stopping training early based on the val_accuracy metric). But Create ML really seems to win in other areas: being able to pass in a simple string (without worrying about tokenization), not having to pass 32 strings in a single prediction, and performance.

    Maybe I should lower my batch_size? I’ve heard there are pros and cons to both lowering and increasing batch_size, but I haven’t played around with it much yet. (One possible workaround for the prediction-time batch constraint is sketched below.)

    Am I just missing something in this analysis?

    I really appreciate your help and advice!
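
    For what it’s worth, here is a minimal coremltools sketch of one way around the fixed prediction batch size. Everything named here is a placeholder: `tf_model` stands in for the trained Keras text classifier, the input name `tokens`, the sequence length of 128, and the 1–32 batch bounds are all assumptions. The idea is that declaring the batch dimension as a `RangeDim` at conversion time lets the converted model accept a single input per prediction, and `compute_units` asks Core ML to use the GPU or Neural Engine when it can:

    ```python
    import numpy as np
    import coremltools as ct

    # Hypothetical: `tf_model` is the trained tf.keras text classifier, and the
    # input name / sequence length (128) are assumed. Declaring the batch
    # dimension as a RangeDim means the converted model accepts anywhere from
    # 1 to 32 tokenized inputs per prediction instead of exactly 32.
    flexible_shape = ct.Shape(
        shape=(ct.RangeDim(lower_bound=1, upper_bound=32, default=1), 128)
    )

    mlmodel = ct.convert(
        tf_model,  # placeholder for the trained Keras model
        inputs=[ct.TensorType(name="tokens", shape=flexible_shape, dtype=np.int32)],
        convert_to="mlprogram",
        # Request any available compute unit (CPU, GPU, Neural Engine).
        compute_units=ct.ComputeUnit.ALL,
    )
    mlmodel.save("TextClassifier.mlpackage")
    ```

    Note that `compute_units` is only a request; whether the GPU or Neural Engine actually gets used depends on the ops in the network. Also, the training batch_size doesn’t have to match this conversion-time shape, so lowering batch_size for training reasons is a separate question.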


  • This is probably going to change in the near future. A lot more functionality is being added to Echo+, so a lot of this will likely change soon. A lot of users, however, choose to only read content from platforms such as Lemmy, so the base (free) app focuses on what the largest number of users actually do on these platforms.

    I truly appreciate your feedback, though. It’s something I really take to heart, and I will continue to think about and assess it. Any updates will be posted at [email protected].