I’m looking for a programming language that can help me build a desktop application for Windows, macOS, and Linux that’s not big, but not small either. Additionally, I’d like to be able to build a website with the same language. I’ve been considering Ruby, Python, Golang, and JavaScript. Python seems to be mainly used for scripting and AI, so I’m not sure if it’s the best fit. JavaScript has a lot of negative opinions surrounding it, while Ruby sounds interesting. Can anyone recommend a language that meets my requirements?
Ever tried to follow a large Ruby codebase like GitLab? Absolute nightmare. Not only does it not have type annotations, so you can’t follow code by clicking, but you can’t even follow it by grepping, because Rubyists seem to love generated identifiers. Even the syntax of the language makes grepping worse, e.g. the lack of brackets prevents you from grepping for function calls like "foo(".
You’re talking about Rails. That’s like saying Kotlin is a terrible language because your only exposure to it is with something that decided to use GlassFish, WildFly Swarm, and Camel.
You can literally follow code perfectly fine in an IDE like RubyMine. It actually works much better than Python, because Ruby is incredibly consistent in its language design, while Python is an absolute mess (same with JS; try opening a large Python or JS project in PyCharm or WebStorm).
No clue what you’re talking about with grepping though. Use an IDE like I said and you can literally just “Find all usages” or “Jump to declaration”, etc.
In any case, you shouldn’t be using any of these for large projects like GitLab, so it’s completely inconsequential. Saying something like “Java is terrible, have you ever used it for a CLI? It’s so slow it’s impossible to do anything!” is idiotic, because of course it is. That’s not what it’s built for. Ruby is a scripting language. Use it for scripting. It kicks Python’s ass for many reasons, JS is terrible for scripting, and while you can use something like Bash or Rust, the situation is incredibly painful for both.
None of this has anything to do with the language design. You’re talking about language design and equating it with the language being terrible, and then it turns out it’s because you don’t use any of the tools that would actually make it work.
Maybe other Ruby code is better, but people always say Rails is the killer app of Ruby so…
That only works if you have static type annotations, which seem to be very rare in the Ruby world.
Well, I agree you shouldn’t use Ruby for large projects like GitLab. But why use it for anything?
I’ve literally never heard anyone say that…
No, it literally works for any Ruby code in any project. You do not need static type annotations at all. I can tell you’ve literally never even tried this…
Because it’s a fantastic scripting language with a runtime that is available on almost every platform on the planet by default (yes, most Linux distributions include it, compared to something like Python, which is hardly ever included, and when it is, it’s 2.x instead of 3.x). It’s also much more readable than Bash, Python, JavaScript, etc., so writing a readable (and runnable-everywhere) script is dead simple. Writing CLIs with it is also dead simple; while I think Python has a few better libraries for this, like Click, Ruby is much more portable than Python (this isn’t my opinion, this is experience from shipping both Ruby and Python CLIs for years).
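For anyone who hasn’t seen Click, here is a rough sketch of the kind of minimal CLI being referred to (the command name and the --shout flag are made up for illustration):

    import click

    @click.command()
    @click.argument("name")
    @click.option("--shout", is_flag=True, help="Uppercase the greeting.")
    def greet(name, shout):
        """Greet NAME from the command line."""
        message = f"Hello, {name}!"
        click.echo(message.upper() if shout else message)

    if __name__ == "__main__":
        greet()

Ruby’s standard library ships OptionParser for the same job without an extra dependency, which is part of the portability point above.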
Well you didn’t listen then. Google the phrase.
I do not need to try it to know that this is fundamentally impossible. But I will try it, because you can go some way towards proper type knowledge without explicit annotations (e.g. PyCharm can do this for Python) and it’s better than nothing (but still not as good as actual type annotations).
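As a rough illustration of what that buys you (the classes and names below are made up): an IDE can resolve a member through a local assignment with no annotations at all, but an unannotated parameter gives it nothing to work with.

    class A:
        def bar(self):
            return 1

    class B:
        def bar(self):
            return "one"

    a = A()             # inferred from the assignment: a is an A
    print(a.bar())      # go-to-definition on bar has exactly one target here

    def foo(x):         # no annotation, nothing local to infer from
        return x.bar()  # could be A.bar, B.bar, or anything else with a bar

That local-assignment case is the “better than nothing” part; the unannotated parameter is where the rest of the thread’s argument picks up.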
Bash definitely. Not sure I’d agree for Python though. That’s extremely readable.
Jumping to declarations or usages has absolutely nothing to do with types, so I have no clue why you think type annotations are needed to make jump-to useful.
Oh really? How would an IDE go-to-definition on x.bar in this code?

    def foo(x):
        return x.bar

Best it can do is heuristics and guesswork.
By using the AST? Do you really not know how languages work? I mean seriously, this is incredibly basic stuff. You don’t need to know the type to jump to the AST node location. Do you think that formatters for dynamic languages need to know the type in order to format them properly? Then why in the world would it need to know the type to figure out where to jump to for a definition!?!
Edit: also, in the case of Ruby, the entire thing runs on a VM (it used to be YARV, but I think that might have changed recently), so there’s literally bytecode providing all the information needed to run it. I highly recommend reading a book about how the Ruby internals work, since you seem to think you understand but it’s quite clear you don’t, or for some reason think “jump to” is this magical thing that requires types.
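To make the AST claim concrete, here is a minimal sketch using Python’s ast module (the sample source is invented): locating every definition with a given name needs no type information at all.

    import ast
    import textwrap

    # Two unrelated classes that both define a method called "bar".
    source = textwrap.dedent("""
        class A:
            def bar(self): ...

        class B:
            def bar(self): ...
    """)

    # Walk the syntax tree and record where every definition named "bar" lives.
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == "bar":
            print(f"def bar at line {node.lineno}")

This finds both definitions by name alone; deciding which one a particular x.bar should jump to is the part the replies below say needs type information.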
I think you’re getting a bit confused. How do you know where x’s type is defined, and therefore where x.bar is defined, if you don’t know what type x is? It’s literally impossible. Best you can do is global type inference, but that has very big limitations and is not really feasible in a language that wasn’t designed for it.

Not sure if that is a serious question, but it’s because formatting doesn’t depend on the type of variables, but going to the definition of a field obviously depends on the type that the field is in.
Maybe my example was not clear enough for you - I guess it’s possible you’ve never experienced working intellisense, so you don’t understand the feature I’m describing.
    class A:
        bar: int

    class B:
        bar: str

    def foo(x):
        return x.bar

Ctrl-click on bar. Where does it jump to?
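For contrast, a sketch of the same example with the parameter annotated (assuming an A is what the caller is meant to pass):

    class A:
        bar: int

    class B:
        bar: str

    def foo(x: A) -> int:
        # With the annotation, x.bar can only mean A.bar,
        # so Ctrl-click has exactly one place to jump to.
        return x.bar

Without the annotation, both A.bar and B.bar match by name, which is exactly the ambiguity the question above is pointing at.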