WnSoft Forums

jt49
Advanced Members
  • Posts: 1,401
  • Days Won: 10

Everything posted by jt49

  1. Option 1: Copy the show blocks together into one project. It is important that every audio clip is bound to some scene (slide). In the slide list of a project you can select all slides, copy them to the Windows clipboard, and paste them into the other project. The bound audio clips are transferred as well and keep their correct positions. Option 2: Chain the shows by calls: in "Project Options > Advanced" choose "Run Slideshow" and enter the name of the following show, e.g. Schau1.exe calls Schau2.exe, and so on. If you put all EXE files into one folder and start the first one, all shows run seamlessly one after the other. Important: all EXE files must be created with the same version of PTE. Regards, jt
  2. http://www.picturestoexe.com/forums/index.php?/topic/17848-complete-interface-change-time-line/#entry118593 http://www.picturestoexe.com/forums/index.php?/topic/17848-complete-interface-change-time-line/#entry118595 Regards, jt
  3. Creating MP4 videos with custom settings while choosing very small resolutions (e.g. 12x8) produces crashes of x264.exe (the H.264 encoder), i.e. PTE does not check for appropriate input. Regards, jt
  4. I would recommend making your own tests. You may use Audacity or any other audio editor to prepare different kinds of audio clips. Define a new PTE project including just one image with an appropriate duration. Insert the audio clips. Then publish the project as a custom HD video (with the video part at a resolution of 64 by 64). Open the video in Audacity as an audio file, and you will see how PTE handles your audio input. Regards, jt
  5. In the timeline you only see the waveform of the mixed stereo channels, but you hear the sound in stereo! Regards, jt
  6. Sorry JT I don't know what unreflected contribution means This is an obvious killer argument. An attempt in German: "Es handelt sich um eine unüberlegte Aktion" (it was an ill-considered action). Sorry, but I have to say it. If you want to avoid another discussionm (do you mean "discussion"?) why reply, just ignor (do you mean "ignore"?) the posts. Sorry, but I have to say it: We have to take care that in future versions the nonsense of KFSD will not be the only way to go. If it remains just an option, I won't care about it. Regards, jt
  7. Indeed, a nice presentation! Nevertheless we see a rather unreflected contribution. We should not forget that the KFSD feature has been discussed several times since the introduction of version 7 (e.g. here). We still see side effects which are a horror for people who care for a precise synchronization of transition points and music. So there are substantial reasons not to use KFSD! I hope that we can avoid another discussion. Regards, jt
  8. I have never seen a post like this. Regards, jt
  9. http://www.thomann.de/gb/samson_media_one_5a.htm Regards, jt
  10. PTE 8 is not color managed (some future version will be). Prepare your images (or copies of them) according to the color profile that is enforced by your Spyder calibration system. If this profile differs from sRGB, then sRGB should not be the profile of your choice. (A small sketch of such a profile conversion follows the post list below.) Regards, jt
  11. Another option: "Ctrl + F11" for scaling up and "Ctrl + F12" for scaling down. It may be worthwhile to study the section on hotkeys in the Online Help. Regards, jt
  12. PTE provides dynamic blur, and that's fine. It has been called natural blur, with the consequence that the edges of a blurred image become transparent. This is a nice feature when dealing with images which do not cover the whole screen. With images that just cover the screen, the given dynamic blur is awkward because of the transparency, so you often have to work with appropriate objects in lower layers. It would be fine to see a choice for dynamic blur: natural blur (as we have it now), and Photoshop-like blur where the edges remain opaque and keep their original straight shape. Remark: Please don't provide any workarounds! I am well aware of them. Regards, jt
  13. The feature that you might need is part of an advanced speaker support, as provided by other AV software. While running the show in the preview, one monitor (or projector) shows the presentation, while the second monitor shows the timeline with a running cursor, and at well-defined points in time windows with text comments are displayed on the second monitor. Regards, jt
  14. This is what I see in Photoshop: Regards, jt
  15. There are always workarounds. Nevertheless it would be fine to have particular features for the mask itself: perhaps borders, but I would like drop shadows even more (see here). Please do not enter any workarounds for shadows; I am well aware of them. Regards, jt
  16. Did you try to create a slide style from your template? When applying a style you do not need to rename images. Regards, jt
  17. Open PTE, change the language to French, and you will notice that our French friends use the word "vue", much better than "slide". Regards, jt
  18. I see the same effect as Mavi does. I wrote "H - H" in LibreOffice (black letters) and PTE (white letters). Compare the results shown in the attached images. Regards, jt
  19. The question is, what do we call easy? Adding a hole (or call it a window) to an object would require a lot of additional parameters in O&A (the aspect ratio of the hole/window, one or two zoom factors relative to the object, two coordinates of the hole's center relative to the hole itself, two coordinates of the center relative to the overall object, perhaps an angle). Would you want to see changes of these values on key frames (i.e. would you like to see animations of your hole)? Furthermore: Why have your new feature only for rectangles, why not for images or videos? Why should the hole be restricted to a rectangular form, why not have elliptic holes? But you should keep in mind: We have all these features! You only have to learn some basics on masks! I am sorry to say that the Munich Oktoberfest is just over now. Perhaps you would like to visit it next year. Then take the time for a few extra hours, and I'll show you how to make basic mask constructions. Regards, jt
  20. PTE does not indicate visually what you want to see. PTE does not change the waveforms if you change the volumes via project options or envelopes, and you will not see what happens to the mixed soundtrack. There is a simple workaround to check the final mix: Export the show as a video (perhaps at a low video quality and low resolution in order to have a quick result). Open the video as an audio clip in Audacity and see if there is any clipping. (A small sketch of such a check follows the post list below.) Regards, jt
  21. The original poster asks for an object with a hole (here called a window). We once used constructions of that kind at times when we did not have masks. Now we can easily make use of masks, and the construction provided by Dave in post #5 is not a workaround, it is a (perhaps the) solution. It is much more general, as we can apply it to objects other than rectangles (images, videos, ...), and we can have different forms of holes. My statement: PTE should not be overloaded with new features that are easy applications of masks. Regards, jt
  22. Gary, please rethink your statement! In one case, PTE was fed with your originals (high-resolution images). In the second case, PTE was fed with copies (not the same images) that had been resized with FastStone. In the first case the large images had to be resized to a height of 1080 in order to do the video encoding. This resizing was triggered by PTE and was done using a resizing algorithm we do not know, perhaps a rather fast one. In the second case PTE did not need to trigger resizing, as the input images had already been resized with FastStone. If I remember correctly, FastStone normally uses the Lanczos algorithm, which produces good results while being slow. So it is very likely that your two processes of video encoding were based on copies of your original images that had passed through different resizing methods. Different algorithms lead to copies of different sharpness, and video encoding of sharper images may lead to a larger amount of data. (A small sketch illustrating this follows the post list below.) Regards, jt
  23. Question: Why should the two video files have equal size? There is no reason for assuming this! In both cases the same encoding process was used, but the input data was different. In one case, the images were resized by PTE, while in the other case the images were resized by an external image editor or image viewer. Hence, the video encoder was fed by two totally different streams of images. Regards, jt
  24. Sometimes it is quite unpleasant in this forum, perhaps a reason why I have reduced the time I spend with it. There are people who are so eager to enter posts even if their contributions are totally evident (as can be seen in this thread), or even knowing that everything has already been said. Others are just negligent, as can be seen here. Regards, jt
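
A note on post 10: here is a minimal sketch of preparing image copies for a viewer that is not color managed, assuming Python with Pillow, sRGB source images, and a hypothetical profile file monitor.icc written by the calibration software; all file names are placeholders only.

    # Sketch: convert an sRGB image copy to the monitor profile so that a
    # non-color-managed viewer displays it as intended.
    # Assumptions: Pillow is installed, the source image is sRGB, and
    # "monitor.icc" is the profile exported by the calibration tool.
    from PIL import Image, ImageCms

    src_profile = ImageCms.createProfile("sRGB")          # assumed source space
    dst_profile = ImageCms.getOpenProfile("monitor.icc")  # hypothetical file name

    img = Image.open("slide_0001.jpg").convert("RGB")
    converted = ImageCms.profileToProfile(img, src_profile, dst_profile,
                                          outputMode="RGB")
    converted.save("slide_0001_prepared.jpg", quality=95)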
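
On the clipping check from post 20: a minimal sketch of a script-based alternative to inspecting the waveform in Audacity, assuming ffmpeg is on the PATH and Python with numpy and soundfile is available; "show.mp4" is a placeholder for the video exported from PTE.

    # Sketch: extract the mixed soundtrack from the exported video and look
    # for clipped samples. Assumptions: ffmpeg on PATH, numpy and soundfile
    # installed, "show.mp4" is the exported video.
    import subprocess
    import numpy as np
    import soundfile as sf

    subprocess.run(["ffmpeg", "-y", "-i", "show.mp4", "-vn",
                    "-acodec", "pcm_s16le", "mix.wav"], check=True)

    data, rate = sf.read("mix.wav")          # float samples in [-1.0, 1.0]
    peak = np.max(np.abs(data))
    clipped = int(np.sum(np.abs(data) >= 0.999))
    print(f"peak = {peak:.3f}, samples at full scale = {clipped}")

A peak value sitting at full scale, or a large number of samples at full scale, indicates that the mix clips and the volumes should be lowered.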
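
On posts 22 and 23: a minimal sketch illustrating why the choice of resizing algorithm changes the data the encoder receives, assuming Python with Pillow (9.1 or later for the Resampling enum); the file name and the JPEG size comparison are only illustrative stand-ins for the actual video pipeline.

    # Sketch: resize one image with a fast filter and with Lanczos and compare
    # the resulting JPEG sizes as a rough proxy for how much detail reaches
    # the encoder. Assumption: "original.jpg" is a large source image.
    import io
    from PIL import Image

    img = Image.open("original.jpg")
    target = (1920, 1080)

    for name, flt in [("bilinear", Image.Resampling.BILINEAR),
                      ("lanczos", Image.Resampling.LANCZOS)]:
        buf = io.BytesIO()
        img.resize(target, flt).save(buf, format="JPEG", quality=90)
        print(f"{name}: {len(buf.getvalue())} bytes")

The sharper Lanczos result typically retains more fine detail and therefore more data, which is one plausible explanation for the size difference discussed above.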