Zwabel’s Weblog

June 21, 2009

KDevelop4 UI: Areas, Working Sets, etc.

Filed under: KDE,KDevelop — zwabel @ 9:58 pm

General Progress
A lot is happening in KDevelop4 these days. It's now already nearly two months since our developer meeting in Ukraine. We had a lot of fun, although for me the trip started two days late: I didn't get my passport in time, damn. But once there, I got quite productive. Since I have limited time these days, that hack sprint motivated me to tackle the one big outstanding architectural issue still present in the duchain. I knew it was a painful mammoth task, which is why I had kept my fingers off it for a long time.

I have implemented a special reference-counting mechanism for the duchain that uses the standard, convenient C++ way of achieving such a thing: in the constructor of an object, increase a counter, and in the destructor, decrease it again. Now finally _everything_ in the duchain is reference-counted, even extremely lightweight objects like "IndexedString", which is nothing more than a wrapper around an integer representing a string.

That object is used everywhere, and since the reference counts are disk-persistent, increasing them is not as cheap as usual. So here's the trick: it is only done when the memory area the constructed/destructed object lives in is actually marked as a "disk persistent" area, one that will be stored to disk later or is already on disk. That means it has near-zero runtime overhead for most object usages. During shutdown all the objects are swept, and the ones without a persistent reference count are cleared away. The most complicated task was getting all the already existing duchain storage schemes to work nicely together with such reference-counted contained objects.

I got it ready about a week after I was home again. Two weeks later it was actually stable. Anyway, it was worth it. :-)

Apart from this architectural thing, the focus is mainly on polishing now. Tons and tons of bugs and crashes were fixed.

Apart from bug-fixing, I’m trying to use my limited time to move KDevelop4 forward in some of the other areas that need it most. After all, a good C++ support alone is not enough of a selling point for an IDE, and I have some ideas about how KDevelop 4.0 should look. Also, I sometimes feel the intense need to do something “creative”.

Areas
Now KDevelop4 has a feature called "Areas". From what I know, it's comparable to the Eclipse "Perspectives" feature, with some slight differences.

Each area contains a distinct set of tool views and toolbars, tailored for a specific task (currently we have "Test", "Debug", and "Code"). In contrast to Eclipse Perspectives, areas may also contain different sets of files. We actually thought about renaming them to "Perspective" for consistency, but then again, "Perspective" implies looking at the same thing just from a different direction, while an "Area" is actually a different working space, like a different table, where you use different tools to work on different items. So we'll probably stick with this terminology for now.

Before the hack sprint, there was a little dropdown list in the toolbar to switch areas. But in our opinion this was not very usable, given that we want areas to be a central part of our UI concept. You could never see what other areas there were, and you always needed one click too many to switch them. So we discussed different mechanisms for area switching. My initial idea was quite simple: use tabs. It's the concept that fits best. Just some additional non-removable tabs somewhere at the top, and areas would be totally intuitive and logical to use. The others were a bit more in favor of using separate toolbuttons, and after I wasted a few hours trying to hack something together with tabs, I gave up. Alexander Dymo then created area-switcher toolbuttons, probably similar to the way Eclipse does perspective switching.

However, a few weeks after being home again, I started feeling that these toolbuttons don't work. How is it intuitive that you click a toolbutton and suddenly have completely different files open? And how could we automatically switch the area when we start debugging, without driving the user crazy? Also, toolbars are generally "optional", and if we want to make areas a central concept, we cannot make the area switching optional. We shouldn't give the user a chance to break his UI. :-)

So I started again doing mockups about how the UI could look, here’s the evolution:
http://zwabel.files.wordpress.com/2009/05/mockup.png
http://zwabel.files.wordpress.com/2009/05/mockup3.png
Niko:
http://www.vivid-planet.com/upload/vertical-tabs2.png
Me again:
http://zwabel.files.wordpress.com/2009/05/vertical-tabs3.png
And the final version:
Mockup
Just by the way, you may be wondering why there is suddenly so little wasted space: at the hack sprint, Alexander Dymo removed both status bars. He moved the editor information (line+column) into the file-tab line, and moved the status indication from the bottom toolbar into the bottom dock switcher.

If you’re wondering about the additional highlighting of the current area tab: We were a bit worried that tabs in the top-right would be somewhat out of user focus, the user might not notice when the area automatically changes, and that it might also be a bit confusing having multiple levels of tabs in the same user-interface. The highlighting moves the thing more into the user focus so changes are directly recognized, increases general awareness, and makes it generally look like a different widget then the document tabs, which reduces confusion.

Now, after all the mocking, which I actually just did to start some discussion and gather ideas and opinions, I suddenly found the hidden QMenuBar::setCornerWidget function, which actually allows implementing the last mockup in a relatively clean way. I sat down for an evening, and ended up with exactly what you see in the last mockup. It needed some additional hacking to get the added highlighting, to have the icons on the left side while having a general right-to-left layout, and to make the tab separator line fade out to the left, but it works, and it is solid. And at least my personal experience shows that this is very usable and very intuitive, while not wasting a single pixel.

So far so good.

Working Sets
A few years ago, shortly after I joined the KDevelop project, there was a lengthy discussion about tabs in general, and whether/how they could be made useful. The problem: beyond a certain number of documents, tabs are completely useless for getting an overview. And due to the easy navigation in KDevelop, it easily happened (and still happens) that suddenly 20 documents were open, making tabs completely useless. At the beginning of KDevelop4, some developers were tired of the uselessness of tabs, and completely removed them in favor of a dropdown list. However, I and some others couldn't live with that. The problem: you replace something that sometimes becomes useless with something that is always useless. There has also always been a document-list toolview that shows all currently open documents. Those who don't want tabs probably use that. However, as I know from using Kate, even a document list becomes nearly useless beyond a certain number of documents.

So at that time, I had the idea that instead of trying to create a utopian widget that allows easily managing an infinite number of open documents, we should just give the user better ways of managing the set of documents he is currently working on, so he can easily keep the count of open documents in a tab-manageable range. My idea for achieving this was working sets. A working set is a specific set of files, the files you really work on. For me, when the open-document count grows into an unmanageable range, that usually comes either from working on multiple problems at the same time, or from a lot of browsing through different documents. The core development activity is usually focused on a relatively small set of documents. A working set allows grouping small lists of documents together, archiving and restoring them, easily merging, splitting, and duplicating them, and easily moving files from one working set to another. Such a mechanism would keep the count of files manageable: as soon as you start working on another task, just close the whole current working set, and start your new task with a clean list of documents. As soon as you want one of the old files or the old working set back, just restore it. Paired with a good user interface, this might well create a new and more efficient paradigm of working.

My idea was that each working-set would be represented by a unique icon somewhere in a permanently visible part of the UI, so you can easily access them.

However, since I was very busy with C++ support, I never came back to this idea. But now that I was doing the area-switching work, it came back to my mind:
– Areas have different sets of documents, so if they should be really easily usable, it should also be easy to transfer files from one area into the other.
– Due to the area-tabbar I have added, there suddenly is a perfect place where those working-sets could live: At the left side of it.
– KDevelop4 also supports multiple main-windows. How to synchronize or move documents between them? Working-sets would make it a breeze.

Combined with the advantages above, this just created too much temptation for me not to try it. So within the last weeks I created, piece by piece, full working-set support in KDevelop4.

The hardest part was adapting the background management part of KDevelop4’s UI framework, and until a few days ago it suffered from frequent crashes. But now it seems to finally be stable, so I can announce it for you to try out.

How the UI looks now:
kdev4_ui_working_sets
At the left side of the area switcher, you see the icons for all existing working sets. Currently there are only two. The icons are taken from several other KDE applications. In the long term, we need a unique set of icons, totally association-free in the software world, for use in the working sets. But for now, the most important thing is that each set has a different icon. The area switcher itself shows the currently active working set within the switcher, so you see which working set is active in which area. Also, there's an additional working-set icon to the left of the document tabs, to make clear that they belong together, and to make it yet a bit clearer and easier to use.

When you click one of those icons, the clicked working set is loaded into the current area; or, if it is the current set, it is closed, allowing you to create a new one by opening a new document.

kdev4_working_set
When you hover over a working-set icon, you get a very useful tooltip showing the contained documents, allowing you to load or unload single documents with one click, and to delete or close the entire working set.

kdev4_working_set_2

kdev4_working_set_3
This is how it looks when you're in the debug area, with a different working set open than in the code area. Working sets are fully synchronized, so if you activate the same working set within both areas, the areas turn into Eclipse perspectives, as they both always contain the same documents.

KDevelop4 Beta4
On Monday KDevelop4 will go into a mini-freeze with only bugfixes allowed, before releasing the next beta in the middle of the week. We want to make sure to release a high-quality and stable beta. We released beta3 just a week ago, but that was a bit premature, as it doesn't contain some features that we want feedback on, and there were quite a few important last-minute bugfixes that we would have liked to add, but the release process was already a bit too far along at that point.

April 9, 2009

Nicer Directory Thumbnails, and Thumbnail Sequences

Filed under: KDE — zwabel @ 8:41 pm

When I wrote my last blog post, there were some comments saying that the folder previews didn't look as nice as they could. I agree. Luckily, one day later Fredrik Höglund came up with a patch to do some more complex and nicer painting on the items. Now the previews are laid out like a bunch of physical photos, with some random rotations, a white border, and drop shadows:
folder_previews_new

In my opinion, this looks very nice. The fact that the items are rotated randomly takes away some of the regularity that tends to be annoying, and the borders and shadows make it look like “real” objects.

I had some more ideas how the folder-previews, and previews in general, could be made even more useful.

Typically, the problem with automatically generated previews for "sequence items" (like folders, videos, etc., as opposed to simple images) is that you can never be sure you pick a part of the sequence that is really useful for describing the content.

For example, a video thumbnail taken right at the beginning of a movie will most probably just show an empty black surface. A folder preview may show pictures that are not useful for describing the folder content. Now, in several places on the internet, a simple solution has been found: when moving the cursor over the item, jump through different thumbnails from alternative locations in the sequence, to get a better description of the item.

This seems very reasonable, since the probability that one of the sequence items shows something "interesting" is a lot higher than it is for the part chosen automatically for the initial thumbnail, so I've spent some of my evenings this week bringing this functionality to KDE.

When a file/directory is hovered, the sequence of thumbnails generated for that item is cycled through, showing another thumbnail every second, using a fast fade animation that is friendly to the eye. Currently the only thumb-creator that supports this feature is the one that creates thumbnails for directories, but once there are thumbnailers for videos again, I hope the developers will also implement the interface to support this feature. Since I'm too lazy to do a screencast, I will just attach another screenshot:
folder_previews_iteration1
All you can actually see here is that the icon under the cursor shows different preview pictures than the unhovered version above. You get the idea.

April 6, 2009

KDE for Painters

Filed under: KDE — zwabel @ 11:40 am

My mother is an artist (a painter; see her website). Since my brother and I have to administrate her computer, and she's using Windows XP, we always have to deal with all kinds of problematic implications that come from this. Linux is a lot better suited for remote administration, and since she's already using Thunderbird and Firefox, one would think that this isn't such a hard switch.

So we tried it, and we nearly drove her crazy. Because she was using Windows Explorer before to manage her pictures, she has accumulated a really strange structure for them: hundreds of folders, each with between 5 and 50 pictures, all in a flat tree, and all badly named.

There is only one way of keeping at least a minimum level of overview in this mess: recognizing the content of the folders using folder previews. Although this isn't such a complicated thing, there seemed to be no Linux file manager that had this feature, until now.

I’ve implemented this feature within the KDE preview generator, so it works within all KDE applications now:
folder_previews
Isn’t it beautiful? Actually, the fact that it is beautiful is due to Peter Penz, who put the previews into the folder icon after I implemented the actual previewing. Now all we need is some more speed in preview generation, and it would be perfect.

This is one of the killer features my mother needs, and there is one more. She usually takes photographs of her paintings. The photographs tend to also show the borders, and are never perfectly straight. Now, there is a proprietary application on Windows where you can just select the borders, and have the application map the picture perfectly into a rectangular and correctly scaled form.

Gimp comes relatively close to this with its inverse perspective transform tool, but it doesn't change the pixel size of the image to correctly reflect the size of the selected rectangle. Also, it is too complicated a tool for such simple tasks.

However, digiKam also has a perspective transform tool in showfoto, and showfoto generally seems to be a very good and simple application for doing simple image adjustments like brightness/contrast etc., so this application would be a perfect match. However, that tool didn't support inverse transform and resizing the image to the content yet.

Since it also lives in the KDE source tree, this was a simple point of attack for me. Here you can see the result, already present in the current digiKam beta release:
Here you select the area that will be inversely mapped into a rectangular shape (I didn't have an actual painting available):
inverse_transform_pre
And the result:
inverse_transform_post

So to make it short: If you’re a painter, then KDE 4.3 is for you. :-)

March 30, 2009

Pushing Immature Technology onto the User

Filed under: KDE — zwabel @ 10:52 pm

KDE4’s new Desktop shell called Plasma is meant to do everything better then KDE3’s kickoff+kdesktop did. Innovations over innovations, exciting stuff is going on there. The development goes on in a high pace, and with every release there is new features all over the place. It would be a really great project, if there wasn’t one tiny little problem: It is the core desktop shell of one of the 2 major linux distributions, still they refuse to care about a large part of their current/potential users. Yes, I’m talking about the poor souls who don’t have good enough hardware for full desktop composition, don’t have good enough graphics drivers for stable desktop composition, or the rebels like me, who simply cannot stand the slight lag always feelable on a composited desktop.

I don’t even want to open a discussion on graphics drivers here. Fact is, I have very good graphics hardware, but when using composition, you _always_ pay a price for it, especially when you have a 3280×1200 Desktop setup, and when your system is under heavy load. As desktop composition becomes more popular, the drivers are maturing, but still the general architecture is far from perfect.

Apart from that, a 2000 MHz computer with 512 MB of RAM and without a 3D accelerator card should be able to run any good desktop environment without problems, and it should even be able to look good. There is no technical reason against it. I don't consider Windows Vista a good desktop environment in this regard, btw.

I also don’t want to start a discussion on desktop composition in general. I want to start a discussion on the way those people are treated, who do not want to jump on using the newest immature technology, or simply aren’t able to.

Now along came Plasma. It has tons of beautiful themes available, downloadable through GetHotNewStuff. The only problem: most of those themes look like total crap when composition is disabled, because Plasma does not allow the panel to blend over the underlying desktop without desktop composition. 100% exact transparency by definition cannot be achieved without composition, but all desktop environments except KDE4 support something called "fake transparency", where the panel uses a blended version of the underlying wallpaper as its background. This leads to a nearly correct result, with the only downside that windows covered by the panel are not visible through it. But seriously, who puts windows under his panel, and wants to see them?

However, and I knew this before, the Plasma developers consider something like that an evil, ugly hack, and don't want to put anything like it into Plasma.

Since I’m an aesthetically sensitive person, I got tired of the grey brick at the bottom of my right screen, and put a few evenings into finding out how hacky it would really be to make it look nice. It couldn’t be that hard, after all the plasmoids on the desktops themselves also use the same software aka. fake transparency. And behold: Due to the fact that the desktop and panel live in the same application, and because of the logical API, in the end it turned out quite easy to do, and quite un-hacky. It works unbelievably well: Wallpaper blending animations, moving plasmoids under the panel, or putting animated plasmoids under it works exactly as expected. Here you can see the result. About 80 added lines of code, no evil stuff, no API added, and this result:

Before (Actually this is one of very few themes that don’t look like total crap without composition):
non_transparent_panel

After (To my eyes, about 100 times more beautiful than before):
transparent_panel

Still, not the slightest interest in adding this. To them it might be a hack, but to me, it is the only way of achieving a nice-looking desktop without composition. 80 lines of code, for at least 36% of all Linux users (according to this survey; in my experience it would be even more).

Instead, I get told that I should use composition (btw., games run a lot slower with nvidia just from enabling it in the xorg.conf), I get told that drivers are getting better, and I get told that hardware is getting more powerful. And this is where I see a basic problem with Plasma: they seem to be developing for the future, and only give a small part of their attention to the present.
I don't care whether future drivers will be better, I don't care whether future system tray specifications will be better, when at the same time my desktop does not look nice, my system tray doesn't work properly, and my KRunner doesn't run the commands I type. KDE4 is a technology of _today_, and should work _today_, for everyone.

This goes for all of KDE4: I sincerely hope that in the future, we can find a better balance between innovative development and present usability.

March 29, 2009

Portable Meta-Information

Filed under: KDE — zwabel @ 9:55 am

KDE4 is all about new technologies and standardization. Now we have a central mechanism to store metadata, called Nepomuk. However, it basically still follows the somewhat problematic approach of storing all the metadata in one central place.

I think there is nothing more valuable than the data of the user, and meta-information like, for example, ratings of a song, tags, or comments attached to a file is user-generated data that needs to be treated as carefully as the files themselves.

I have already used many different applications in my lifetime: different email applications, different music players, image-management software, etc., and all kept the user-generated meta-information closed within the application. That means that when the lifetime of the application is over, the information is lost, or, with luck, can be exported with some effort into some reusable format.

Due to those experiences, application-specific meta-information has only a low value to me. I think, for the future, we need to find a way to keep the user's data together, so it is as persistent and approachable as the files themselves:
– When the user copies his photo archive or backs it up to a CD, no matter what application he uses, meta-information like ratings, comments, or tags has to move together with the photos
– When the user has a fresh install, and copies his photo archive from a CD to the disk, the meta-information for the photos should just be there
– User-generated metadata should _never_ be lost just because a file/directory was renamed, a mount point changed, or whatever
– User-generated metadata should not be lost when a file completely unrelated to the item is damaged or deleted (think of a central database)
– In 20 years, when KDE4 has long been history, and I find an old photo backup CD, the metadata should still be readable

When these conditions are met, metadata will finally be worthwhile. But how can this be achieved?

I think with Nepomuk and Strigi we have most of the needed infrastructure available; there are just a few missing pieces:
1. Store user-generated file-related metadata directly where the file is stored, in a standard format. Example:
File:
/media/archiv/pictures/picture1.jpg
User-generated meta-information:
[/media/archiv/pictures/.picture1.jpg.meta] or in shared directories: [/media/archiv/pictures/.picture1.jpg.meta.nolden]
Could contain something like:
RATING=2/5
TAGS=funny,family

2. Change file managers to move/copy meta-information together with the files when handling them individually (I think this is already the case in Dolphin), and delete the meta-information when the file is deleted
3. When finding orphaned meta-information, ask the user what to do with it (don't forget: it's valuable information)

Strigi could collect the information from those metadata files, and Nepomuk would manipulate them. Nepomuk's database would be a kind of cache for the metadata.

The whole behavior should be standardized among desktop environments at some point, so the meta-information would not only be persistent, but also accessible from within every application.

With this achieved, I could finally start using image or music ratings, tags, etc. without that feeling in my stomach of wasting my time.

What do you think?

Update:
Actually, probably the best way would be this:
picture.jpg
picture.jpg.meta
With the meta-information not hidden at all, you will be aware of it when using the command line. Aware file managers like Dolphin should hide the meta-information automatically, while all other, non-aware file managers would show it. I think as long as this is only used for user-generated meta-information like ratings, it would be worth it.

March 13, 2009

Really rapid C++ development with KDevelop4

Filed under: KDE,KDevelop — zwabel @ 8:52 pm

Code Assistants
When developing in a statically typed language like C++, there is usually quite a bit of redundancy during development, especially when creating a completely new piece of code. A powerful IDE with deep code understanding could theoretically save a significant amount of the writing work. My goal with KDevelop4 is to allow the user to concentrate only on the "content" of the code, without wasting too much time creating or adapting declarations in several different places.

To reach this goal, code completion is not enough. Sometimes it is not possible to properly guess what the user wants to do while he is typing, but once a statement is completed, it becomes clear. Also, the completion list is not suitable as a user interface for everything.

During the last weeks I have implemented an assistant architecture within KDevelop. In general it is kind of similar to the bulbs or paperclips known from several office applications, with the main difference of actually being useful. :-) An assistant can watch what happens in the editor, duchain, etc., and show a non-intrusive popup with some keyboard-accessible options as soon as it thinks it can do something useful for the user.

Declarations/Definitions
The first assistant I implemented, already more than a week ago, was one that automatically adapts changed function signatures between declarations and definitions. Personally I hate having to do exactly the same thing twice, so this comes in very handy. As soon as you significantly change a definition or declaration signature, you will see this:
signature_assistant_11
At the bottom you see the assistant popup. Every popup action has an associated number, and you can execute the action using the ALT+number combination. So you will get this effect:
signature_assistant_2
This is already quite a useful assistant, since it saves you from a part of C++ that I personally sometimes find a bit frustrating. But not any more. :-)

Automatic Declaration Creation
There are other, much more significant, types of redundancy when programming in statically typed languages. One such example is iterator types. Why do I always have to write them out completely? Even with code completion, it sucks, since the iterator variable's type is logically completely determined by the value you assign to it. Now, with KDevelop, you can save a lot of this. If the type of the variable is determined by the assignment, just don't write the type yourself, but let the IDE do it for you:
local_declaration_assistant_1
Just push ALT+1 and get this:
local_declaration_assistant_21

Now when you’re designing an algorithm, you can just write as if you were writing python, and let KDevelop create the variable declarations for you:
variable_declaration_assistant_1
The assistant gives you this:
variable_declaration_assistant_2
But it gets even more interesting: if you try calling a function that does not exist yet, you will get this option:
function_declaration_assistant_1
And the assistant will give you this. Notice that even the return type has been correctly matched to the context:
function_declaration_assistant_2

This also works within the local class:
local_function_declaration_assistant_1
Here is the result. Notice that the return type is automatically a reference when you assign something to it in the call:
local_function_declaration_assistant_2
Together with all the other conveniences of KDevelop4, like automatic adding of includes, automatic creation of function definitions, a class wizard that correctly places all the includes and adds new files to the CMakeLists.txt, the Qt documentation integration, code browsing, etc., KDevelop is a really productive IDE.

Development Update
This is one of the few large features that I still wanted to implement before a release; now there are not many major features left on my todo list. Although I'm quite sure I will get some more ideas, the next major task is improving usability, killing all the little bugs, and improving the performance and scalability of the duchain store so it doesn't get slow once it reaches a certain size.

The other parts of KDevelop are doing OK, but unfortunately the debugger still hasn't reached a usable state; it's the one big gap still left in KDevelop's functionality.

March 6, 2009

Typedefs in Templates, and Code-Completion

Filed under: KDE,KDevelop — zwabel @ 2:28 pm

Sometimes you have to decide between being “correct”, and being user-friendly.

Also, sometimes you have to do one painful change with many regressions, to reach an ultimately better state.

I hope this was the last such step I had to take before the stable KDevelop release (though you never know). I have changed the internal representation of the C++ DUChain, so typedefs spawn custom types, instead of being just pointers to their targets. This is not exactly what the C++ standard says, but it means that KDevelop will no longer replace std::string with "std::basic_string<blah bla>" when you implement a function or do other simple refactoring stuff.

There are some problems with doing this in general, though. For example, in a template container like "std::list", you want the completion list to show not "std::list::reference", which is also a typedef, but instead the type you gave to the container. So how should this be done to be most user-friendly, while still staying correct enough?

I’ve implemented this simple logic for the completion-list: If the typedefs target type recursively contains less template parameters, show that one, else show the typedef type. I’m quite sure you can construct a case where this does not work as expected, but for 99% of all cases, it should show the nicest thing that could be shown.

But there are other problems with representing typedef types as real types. The C++ standard explicitly states that typedef types given as template parameters spawn exactly the same template instantiation as the typedef's real type. For that reason, a typedef has to be resolved before doing any template processing. If this were done, you would again be back to "std::basic_string<bla bla>" as return types in "std::list<std::string>", so a decision had to be made here.

I have decided to spawn different template instantiations for typedef types, so that the user sees the nicest possible representation.

And here are the glorious results:
typedef_1

typedef_2

Unbelievable that such a simple-looking thing can be so painful. :)
The good thing is: After some time of finding all the regressions, KDevelop is better than ever!

February 13, 2009

KDevelop4: Creating a Qt slot, the cool way

Filed under: KDE,KDevelop — zwabel @ 10:52 pm

In an earlier blog post I already wrote about automatic signal/slot matching and completion (see this). The code-completion box shows you the appropriate connectable signals and slots, and also shows exactly which signals match which slots. Now what if you have a signal you want to connect to, but you don’t have a matching slot yet? In the last days, I’ve implemented a new feature that automatically creates a slot with the typed name, exactly matching the signature of the connected signal.

See this example:
signal_slot_completion

A signal is being connected, but there is no perfectly matching slot (or maybe you want another one). Now you can just continue typing, and you will see this item in the list:
signal_slot_completion_creation
Now when you execute this completion item, KDevelop will automatically create the slot within the declaration of the local class, and will nicely complete the current connect(..) call, pointing at the new slot:
signal_slot_completion_creation_ready
Here you see the declaration that was created within the header-file:
signal_slot_completion_creation_ready_declaration

Together with the implementation helpers, this allows really rapid programming: just pick your signal from somewhere, let KDevelop create the declaration and finish the connect(..), go to the place in the source file where you want to implement the slot, and let the implementation helper create a stub implementation for you. :-)

Development update
Except for this, I’ve mainly done smaller bug- and crash-fixes since my last blog post.
One interesting development is that Hamish Rodda has started fixing up our old version of the integrated debugger, which is nearly a straight port from KDevelop3, so we might have a usable debugger soon. He’s doing this in a branch. The debugger in trunk has some additional interesting features like hover tooltips, and has seen internal refactoring because the original maintainer considered the old debugger’s code a mess; but it suffers from the fact that it doesn’t properly work, and seems to be quite a way from doing so, at least according to those who have tried to fix it.

February 4, 2009

KDevelop4: Automatic include-directives and forward-declarations

Filed under: KDE,KDevelop — zwabel @ 1:09 am
Tags: , , , , , ,

Missing Include Completion
C++ is a great and powerful programming language. Yet compared to some other languages it has the downside that you always have to deal with include directives or forward declarations before you can use a class.

This factor often discourages me from creating too many different source files, although design-wise that would make sense. Wouldn’t it be nice if you could just start hacking without caring about the whole visibility business? Especially when you have to write the same includes again and again, and already know the library you’re using and its classes, this is nothing more than an annoyance.

I implemented a feature to automatically add include directives in KDevelop 4 about a year ago, but it wasn’t as comfortable and useful as it could be. During the last days, I’ve taken the time to polish this feature up to the point where it’s worth a blog post.

The whole thing is based on the DUChain, and it respects all declarations in the global DUChain store. This means that from the moment KDevelop has processed a source file, you can use the declarations from that file anywhere. Whenever you type something and the completion list does not offer any completions, meaning the item is not visible, the missing-include completion starts searching the DUChain store for matching declarations; when it finds some, it offers, right within the completion list, to add an include directive for you.

This works in many different situations: when you try calling a function, constructing a class, or instantiating a template, when you try to access the contents of a class that is either unresolved or has only been forward-declared, or when you just type the name of an arbitrary declaration.

Sometimes, however, when writing header files, you do not need the whole class definition but rather just a forward declaration. From today on, the very same list shows an entry that adds a forward declaration of the typed class for you. This even works with template classes, and it correctly respects namespaces, creating namespace declarations around the forward declarations as needed.

Together, provided that KDevelop already knows the classes you are working with, this frees you from one annoying part of hacking.

Examples

This is what happens when you try calling a known function that has not been included yet:
missing_include_completion_function_call

And this is the effect you will have when you push the up-arrow and execute the “Add #include …” action:
missing_include_completion_function_call_result

This is what happens when you simply type the name of a known class:
missing_include_normal_completion1

And this is what you’ll get when you pick the “Add forward-declaration” action. A fully valid forward-declaration was inserted. Well, apart from the fact that std::allocator is not defined. ;-)
missing_include_normal_completion_result

Now std::vector is forward-declared. But what if you actually try using it? KDevelop will notice and offer to include the correct header so you can use it:
missing_include_member_completion

And this is what happens when you try to instantiate a known but not yet included template. Notice that it offers to include the forwarding header “QMap”, and not only the header that really contains the class, “qmap.h”:
missing_include_template_completion

And now a little demonstration of how this can help you use almost any library:
icore_1

You can see that ICore has a lot of members that in turn return pointers to other classes, which of course were only forward-declared. This usually means that before you can use the result of activeSession(), you have to find out where KDevelop::ISession is defined and include it manually:
icore_2

With KDevelop4, the content is only one key-press away:
icore_3

Disclaimer
This is a feature for experienced users who know what they are doing. Its results always have to be reviewed, and KDevelop does NOT do any programming for you. ;-)

Development Update
Although I don’t have that much time any more, I found enough time to do quite some polishing during the last weeks. Many crashes and bugs were fixed, and I even found time to implement some smaller features.
Here are the most important points:
– The implementation-helper and signal/slot completions are now shown in separate groups in the completion list, placed appropriately (usually right at the top)
– Access rights are now fully respected by the completion list, including friend declarations
– Declarations that cannot be found can now be highlighted with an error underline (there’s a new “highlight semantic problems” option in the configuration)
– Added context-sensitive code completion for built-in C++ keywords
– Full code completion, use-building, and refactoring for namespace names
– Less annoying automatic completion, because it auto-hides when an item in the list is matched
– Worked around a multi-threading problem in kdelibs’ ksycoca that caused KDevelop to crash very often
– Fixed a repository-locking problem that made KDevelop reproducibly crash under special circumstances

Although we do not yet have all the features we want, we will be releasing a beta version of KDevelop4 soon, because we believe it’s already a useful application. Aleix Pol Gonzales has started working on the much-needed documentation integration, and Hamish Rodda has today started a new discussion on the debugger, and is planning to bring it into a usable state.

January 8, 2009

C++ IDE Evolution: From Syntax Highlighting to Semantic Highlighting

Filed under: KDE,KDevelop — zwabel @ 3:19 am

Most of us developers are so accustomed to syntax highlighting that we couldn’t live without it. Within the last years, it happened to me a few times that I had to look at C++ code in an editor without it. Every single time, my initial feeling was that I was looking at a large unstructured text blob, totally unreadable. By putting some additional cognitive energy into the task, I was able to manage in the end, but there is no doubt that syntax highlighting increases productivity; it is not just eye candy.

Syntax Highlighting
Now the first interesting question is: What exactly is it about syntax highlighting that makes the text easier to work with?

When trying to understand what code does, we usually first try to recognize its coarse structure. For that, we need a fast overview of the code; with that overview, we can decide where to concentrate next. The problem is that while building this overview, a great many words have to be scanned. Actually reading all of them would take a long time and would be very annoying. Highlighting specific words in deterministic colors reduces that load by giving us familiar orientation points and patterns that our eyes can “hook” onto, letting us find the position we’re searching for faster.

So syntax highlighting helps us keep an overview and find the place we’re searching for. However, it can _not_ help us actually understand the code, because by the very definition of “syntax” it can only highlight by what the code looks like, not by what the code means, since that would require wider knowledge.

Semantic Highlighting
To overcome that limitation, deeper knowledge of the code is required. Right from the beginning, the DUChain in KDevelop was designed to represent exactly that knowledge, and more advanced code highlighting was one of the basic motivations behind creating it at all.

So how can it help? There are a few points to this:
1. Additional structure. With highlighting now applied much more widely, the code gains a lot more colorful structure, which can bring the same benefits syntax highlighting brings in general. However, this is arguable: to some, the additional structure might even seem chaotic, simply too much. It is definitely something you need to get used to.

Example: Look at this piece of code without semantic highlighting. It optically contains two big blobs of code that, to me, being used to semantic highlighting by now, look quite unreadable.
semantic_highlighting3
Now look at the same thing with semantic highlighting. The additional structure splits the code-blobs up, and makes them perfectly readable.
semantic_highlighting2

2. Recognizing errors: When specific elements are always colorized the same way (for example global items, enumerators, or items in the local class), you will at some point expect them to be highlighted that way, and you will notice errors much earlier whenever the highlighting conflicts with what you expect.
Example: See all the items beginning with “m_”: they are highlighted in brown, as all class-local items are. If, for example, m_quickOpenDataProvider were actually a global object, it would be highlighted differently and you’d notice the problem right away (this is most useful with function calls).
semantic_highlighting1

3. Understanding code: The real facility that helps you understand global code structure is the navigation tooltip or the code browser. However, those are not very useful for understanding local algorithms. The following picture illustrates my favorite part of the semantic highlighting: local variable colorization. It assigns a semi-unique color to each variable in a local context, making it much easier to distinguish those variables, largely without reading their full names at all. By freeing you from actually having to read all the variable names, it lets you grasp local code relations faster, and it has already helped me fix quite a few very stupid bugs right away. :-)
semantic_highlighting

And the best thing about it: you don’t have to use it at all. Today I added the option to completely disable semantic highlighting or local variable colorization altogether. Though if you do, rest assured that you will re-enable it after a short time anyway. ;-)

Development Update
I’ve fixed tons of bugs, implemented a lot of new code-completion features, improved the internal template support in general so it once again works correctly with recent versions of the STL iterators, and, most importantly, made a quite large change to the internal environment management that makes parsing large projects much more efficient and scale much better disk-space-wise.

All together, I think I’ve pushed KDevelop4 far enough to be able to use it effectively on my upcoming diploma thesis, which gives me a very good feeling, and which represents a kind of milestone, since I won’t be able to put a similar amount of time into KDevelop4 in the near future as I did in the past.

The C++ support now seems nearly feature-complete, and very stable. I really haven’t encountered a non-temporary C++-support/DUChain crash for a long time.

Now we just need to push the debugger and the other lagging parts of KDevelop up to expectations, and we’ll be heading for a very good release. As always, if you want to see this stuff in a stable release soon, consider helping, since some of those other parts really need some love.
