I took a few minutes and sat down at the synth to see if I could make any adjustments that would either eliminate or reduce the phasing/beating effects caused by two oscillators tuned to nearly the same frequency.
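To put a rough number on that beating before getting into the experiments: two sines a fixed number of hertz apart sum to a level that swells and collapses at exactly the difference frequency. Here's a quick Python sketch (my own illustration, not anything from the synth; the 440/441 Hz figures are just made up for the example) that measures the rise and fall of the summed level:

```python
import math

def envelope_minmax(f1, f2, seconds=2.0, rate=48000):
    """Sum two unit-amplitude sines at f1 and f2 Hz and report the min and
    max short-term peak level -- the slow swell and fade heard as beating."""
    window = rate // 100  # 10 ms analysis windows
    total = int(seconds * rate)
    samples = [math.sin(2 * math.pi * f1 * i / rate)
               + math.sin(2 * math.pi * f2 * i / rate)
               for i in range(total)]
    peaks = [max(abs(s) for s in samples[i:i + window])
             for i in range(0, total - window, window)]
    return min(peaks), max(peaks)

lo, hi = envelope_minmax(440.0, 441.0)  # 1 Hz apart -> one beat per second
print(lo, hi)  # the level swings between near-0 and near-2
```

The closer the two frequencies, the slower the swing, which is exactly why near-unison tuning produces that slow, obvious phasing.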
Experiment #1: Oscillators in Unison (tuned to zero beat).
With careful tweaking I was able to adjust the pitch of Osc. 3 to match Osc. 1 so that no phasing occurred (unison, or zero-beat, tuning). Both oscillators were set to the same wave shape and the same mixer level. While this was possible, I noticed at least two things: first, the two oscillators eventually drifted out of tune with each other, and second, even when I could tune them exactly, there was no way to control the phase relationship between them. In other words, one osc.'s phase might be at 0 degrees while the other ended up 35 degrees out of phase with it. This is an issue because there is no way to determine whether, in the end, one osc. is reinforcing the other or canceling it.
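That reinforcement-versus-cancellation point is easy to show numerically. This Python sketch (my own illustration, not a model of any particular synth) sums two unit-amplitude oscillators tuned to exact unison and measures the peak of the result at a few phase offsets, including the 35-degree case mentioned above:

```python
import math

def summed_peak(phase_deg, n=1000):
    """Peak amplitude over one cycle of two unit-amplitude sine oscillators
    in unison, with the second offset by phase_deg degrees."""
    phase = math.radians(phase_deg)
    return max(abs(math.sin(t) + math.sin(t + phase))
               for t in (2 * math.pi * i / n for i in range(n)))

print(summed_peak(0))    # in phase: full reinforcement, ~2.0
print(summed_peak(180))  # opposite phase: total cancellation, 0.0
print(summed_peak(35))   # 35 degrees out: partial reinforcement, between the two
```

Same tuning, same levels, yet the resulting amplitude depends entirely on a phase relationship the panel gives you no way to set.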
Experiment #2: Oscillators in Unison with Mixer Differences.
I again tuned them in unison, with the same wave shape but different mixer levels. Though this reduced the phasing sounds, all I was really doing was removing the sound of one oscillator, and with it the power that stacking oscillators adds to the resulting sound. This defeats the purpose of using two or more oscillators.
Experiment #3: Oscillators in Unison with Different Wave Shapes.
This yielded a possible result. I again tuned to unison, set the levels to match, and started experimenting with different wave shapes. What I discovered was that the only combination of wave shapes that might produce an acceptable result was one osc. on a triangle wave and the other on anything but a triangle wave. This eliminated the very apparent phasing artifacts among the high-frequency harmonics, but left a low-frequency phasing that, while not so apparent, was still noticeable.
Conclusions:
1. Perfect unison tuning between oscillators is possible but not reliable, and cannot guarantee that the resulting timbre will be repeatable.
2. Different mixer levels can eliminate the phasing, but doing so defeats the purpose of using more than one oscillator.
3. Different wave shapes can reduce the phasing effects as long as only one oscillator is set to a wave shape that produces high frequency harmonics and the rest are set to a wave shape that does not. However, even this solution still produces low frequency phasing artifacts.
4. I was able to figure this out in 5 minutes at the synth with a little creative experimentation.
5. Part of the fun of using an analog synth is experimenting with it and asking yourself, "What if..."
Can I step on my soapbox for just a minute? Feel free to stop reading if you're not interested... I'll understand.

Begin Rant:
A lot of people using computers with software like Ableton, etc., kind of get into a groove of expecting "apparent perfection" in their results. I say "apparent" because the digital domain can only provide an approximation of an analog event; it can never achieve enough resolution to model an analog event exactly. Yes, it can come really, really close, but in the end the digitized analog waveform is only a collection of discrete samples taken at different time slices, with the values in the gaps between those samples being inferred.
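For what it's worth, that "inferred gaps" idea can be sketched in a few lines of Python. This is only an illustration with made-up numbers (an 8 kHz rate and a 1 kHz tone), and it uses simple straight-line interpolation as a stand-in for whatever reconstruction rules a real program codes in:

```python
import math

RATE = 8000   # assumed sample rate, Hz (made up for the example)
FREQ = 1000.0  # test tone, Hz

# A "digitized" sine is just a list of discrete numbers, one per sample period.
samples = [math.sin(2 * math.pi * FREQ * n / RATE) for n in range(RATE)]

def linear_guess(t):
    """Infer the value between two samples by straight-line interpolation --
    one simple stand-in for the inference rules baked into software."""
    n = int(t * RATE)
    frac = t * RATE - n
    return samples[n] * (1 - frac) + samples[n + 1] * frac

# Compare the inferred value with the true analog value midway between samples.
t = 10.5 / RATE
print(abs(linear_guess(t) - math.sin(2 * math.pi * FREQ * t)))
```

With only eight samples per cycle, the guessed value midway between samples misses the true one by a noticeable fraction of full scale; the software is reporting its rules, not the original event.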
Some people have been conditioned to believe that the computer is the final arbiter of what is correct and what is not. Take the case of the screenshots that Amotz provided. You can lay a ruler across the graphics of the two waves and think that because the wave shapes line up exactly, they must be in perfect tune with each other. The reality is that after a certain point, the software stops measuring the accuracy of the source signals and starts measuring its own limitations! Because of the nature of digital sampling, the software is only as accurate as the sampling rate of the sound file, and as the software's ability to interpret the data in the file according to rules and conditions that were coded into it beforehand. These rules also determine how the software infers what isn't represented in the data. Remember, the software is only processing a dataset of discrete numeric values. It is an exercise in indirectly inferring reality versus experiencing actual reality. Is it close enough? Of course it is, for the most part. Is it an actual depiction of reality? Not quite, as whatever is left out is inferred. Inference of an event is an educated guess at reality based upon the events that occur before and after the particular event being measured.
Why the rant? Because too many are learning to rely on tools to depict reality for them. It stifles their creativity and willingness to experiment because they have been conditioned to not rely on their senses or natural ability. I could've spent much time trying to figure out on a computer why Amotz is having this difficulty. I could've analyzed the two sound datasets to see if I could find differences between values sampled at the same time slice, and if I did find differences in the data (if!), I would hopefully then be able to determine the nature of the differences, and then hopefully be able to correlate the results into a meaningful hypothesis of what is occurring.
However, our brains and senses are so exquisitely acute that just listening to the two sounds immediately provides the clues as to what the problem is. Of course, this presumes that the listener has previously learned how to interpret what they're hearing. But then again, the same holds true for anyone attempting to interpret the results provided by the software, and the person who wrote the rules into the software's code.
In the end, I find that if I get my head out of the digital tools, and stop trying to make actual reality align with what the computer says is reality, then my creative juices are flowing, and I am creating and playing a musical instrument, not a laboratory instrument. I'm also training my senses to accurately interpret events as they happen! Of course your mileage may vary, but feel free to have fun while determining by how much.
Experiment, play, have fun, learn things by the seat of your pants using only your senses, your brain, and your creativity! Learn by doing. Train your senses. Get your head out of the computer!
For those of you still reading, thanks for letting me rant. Feel free to smack me back into reality if need be. I have my fireproof undies on, so it shouldn't be too bad.

End Rant:
I now return you to the existing thread... already in progress.

Bob
Edit:
I have to back pedal a bit...
While what I said above about sampling and software is basically true, it doesn't really reflect what's possible using Fourier Transforms and the like. Indeed, if Amotz is even able to adjust the time scale of his wave displays in Cubase so that only one or two cycles are displayed, he would probably be able to see the phase differences himself, especially if Cubase allows him to display both waves superimposed on each other in the same window.
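As a rough illustration of what that kind of analysis can recover, here is a Python sketch (my own, with made-up numbers; real software like Cubase would use an FFT under the hood) that pulls a deliberate 35-degree phase offset back out of two sampled tones using a single-bin DFT:

```python
import cmath
import math

RATE = 8000    # assumed sample rate, Hz (made up for the example)
FREQ = 250.0   # chosen so the tone lands exactly on a DFT bin (250 = 32 * 8000/1024)
N = 1024       # analysis length in samples

def tone(phase_deg):
    """One analysis frame of a sine at FREQ with a given starting phase."""
    ph = math.radians(phase_deg)
    return [math.sin(2 * math.pi * FREQ * n / RATE + ph) for n in range(N)]

def phase_at(samples, freq):
    """Phase (in degrees) of one frequency component, via a single-bin DFT."""
    k = round(freq * N / RATE)
    bin_sum = sum(x * cmath.exp(-2j * math.pi * k * n / N)
                  for n, x in enumerate(samples))
    return math.degrees(cmath.phase(bin_sum))

a = tone(0.0)
b = tone(35.0)
print(round(phase_at(b, FREQ) - phase_at(a, FREQ), 1))  # recovers the 35.0 degree offset
```

So yes, the tools can measure this sort of thing very well; my point stands only that your ears got there first, in a fraction of the time.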
Anyway, grains of salt are available upon request.

Bob