{"payload":{"header_redesign_enabled":false,"results":[{"body":"With this guide in mind, I ll pin up some considerations for this library that may improve performances:\n\n - [ ] 1. Use ChangeDetectionStrategy.OnPush for comparing @Input by reference and not by object content?\n - [ ] 2. According to point 1. trigger layer.draw() only when config actually changes\n https://github.com/ctinnovation/ngx-konva/blob/bb94638e795f5ae05da72de093afc97861284525/projects/ngx-konva/src/lib/components/ko-layer.component.ts#L29\n - [ ] 3. Considering layer.batchDraw()? May be outdated with konva@9\n - [ ] 4. According to this, it may be useful set listening(false) by default and enforce the use of listening\n directives to enable only explicitely shape (or layer) listening.\n https://github.com/ctinnovation/ngx-konva/blob/bb94638e795f5ae05da72de093afc97861284525/projects/ngx-konva/src/lib/components/ko-shape.component.ts#L51\n - [ ] 5. As for point 4, disable by default stroke shadows.\n - [ ] 6. Automatically debouncing layer drawing?\n - [ ] 7. Enforce trackBy function for shapes *ngFor in documentation?\n - [ ] 8. Check considerations on Konva.pixelRatio = 1 (automatically on Retina displays?)\n - [ ] 9. Switch off perfect drawing by default?\n - [ ] 10. Add shape caching for each component?\n","created":"2024-06-12T08:35:50.000Z","hl_text":"With this guide in mind, I ll pin up some considerations for this library that may improve performances:\n\n - [ ] 1. Use ChangeDetectionStrategy.OnPush for comparing @Input by reference and not by ...","hl_title":"Performance considerations","id":"6811317","num_comments":0,"number":32,"repo":{"repository":{"id":706126195,"name":"ngx-konva","owner_id":66022359,"owner_login":"ctinnovation","updated_at":"2024-03-18T09:03:00.729Z","has_issues":true}},"title":"Performance considerations","url":"/ctinnovation/ngx-konva/discussions/32","updated":"2024-06-12T08:35:50.000Z","user_avatar_url":"https://avatars.githubusercontent.com/u/67106822?s=48&v=4","user_id":67106822,"user_login":"giovanni-bertoncelli"},{"body":"Read the announcement here: https://store.steampowered.com/news/app/2519830/view/6168323165072831255\n\nYou can discuss and ask questions here!\n","created":"2024-06-16T00:31:08.000Z","hl_text":"Is there any misconceptions the team wants to dispel about Resonite performance or about the possible performance\nimprovements itself that come to mind?\n","hl_title":"Major Performance Improvements","id":"6823872","num_comments":20,"number":2346,"repo":{"repository":{"id":699955973,"name":"Resonite-Issues","owner_id":109638421,"owner_login":"Yellow-Dog-Man","updated_at":"2024-06-05T02:46:56.667Z","has_issues":true}},"title":"Major Performance Improvements","url":"/Yellow-Dog-Man/Resonite-Issues/discussions/2346","updated":"2024-06-16T18:36:51.000Z","user_avatar_url":"https://avatars.githubusercontent.com/u/8838625?s=48&v=4","user_id":8838625,"user_login":"Frooxius"},{"body":"Dear all,\n\nI manage a software application that relies on custom scripts created by end users to handle real-time data from various\nconnectors. This approach has worked well so far. However, recently, a user has started using a very large script (2500+\nlines), which is beginning to strain the server CPU.\n\nThe script runs approximately 5-10 times per second, leading to increased CPU usage and more frequent garbage\ncollection.\n\nHere s a screenshot of the overall CPU profile for a minute of activity. 
Major Performance Improvements
Yellow-Dog-Man/Resonite-Issues #2346 | opened 2024-06-16 by Frooxius | 20 comments
/Yellow-Dog-Man/Resonite-Issues/discussions/2346

Read the announcement here: https://store.steampowered.com/news/app/2519830/view/6168323165072831255

You can discuss and ask questions here!

Highlighted match from the thread: "Are there any misconceptions the team wants to dispel about Resonite performance, or about the possible performance improvements themselves, that come to mind?"

Performance optimization
yuin/gopher-lua #493 | opened 2024-06-10 by sraimond83 | 0 comments
/yuin/gopher-lua/discussions/493

Dear all,

I manage a software application that relies on custom scripts created by end users to handle real-time data from various connectors. This approach has worked well so far. However, a user has recently started using a very large script (2,500+ lines), which is beginning to strain the server CPU.

The script runs approximately 5-10 times per second, leading to increased CPU usage and more frequent garbage collection.

Here's a screenshot of the overall CPU profile for one minute of activity. [screenshot: "gopherlua" CPU profile]

You can see that script parsing and compiling take as much time as running the script. There is also a significant amount of garbage collection, which would be reduced by 80% if no Lua script were running (I've already run tests to confirm this).

I would like to optimize this process if possible. The script is always the same, meaning its source never changes. However, each custom function implemented returns data dynamically, which is passed in before .DoString(...) is called, based on the incoming data from the inbound connector that my application handles.

Is there a way to reuse the Lua code/VM without going through compilation each time, and to generate less garbage to be collected?

Advice is welcome, thank you!
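A pattern commonly used with gopher-lua for exactly this situation is to parse and compile the unchanging script once into a *lua.FunctionProto, then push it with NewFunctionFromProto on each run, so the per-message cost is execution only; the per-message data can be handed in through a global or a table instead of rebuilding the source string. The sketch below is an assumption-laden illustration, not anything stated in the discussion: the helper names and the `payload` global are invented for the example.

```go
// Sketch: compile the unchanged script once, reuse the compiled proto per message.
// compileScript, runCompiled and the "payload" global are illustrative names.
package main

import (
	"strings"

	lua "github.com/yuin/gopher-lua"
	"github.com/yuin/gopher-lua/parse"
)

// compileScript parses and compiles Lua source a single time.
func compileScript(source, name string) (*lua.FunctionProto, error) {
	chunk, err := parse.Parse(strings.NewReader(source), name)
	if err != nil {
		return nil, err
	}
	return lua.Compile(chunk, name)
}

// runCompiled executes an already-compiled proto on an existing state,
// avoiding the per-call parse/compile cost of DoString.
func runCompiled(L *lua.LState, proto *lua.FunctionProto) error {
	L.Push(L.NewFunctionFromProto(proto))
	return L.PCall(0, lua.MultRet, nil)
}

func main() {
	proto, err := compileScript(`print("got: " .. payload)`, "userscript")
	if err != nil {
		panic(err)
	}

	// Reusing one LState (or a small pool of them) also cuts allocation churn;
	// here the per-message data is passed through a global rather than by
	// concatenating it into new source code.
	L := lua.NewState()
	defer L.Close()

	for _, msg := range []string{"a", "b", "c"} {
		L.SetGlobal("payload", lua.LString(msg))
		if err := runCompiled(L, proto); err != nil {
			panic(err)
		}
	}
}
```

Whether this helps enough depends on how much of the remaining time is in the script body itself; compiling once only removes the parse/compile half of the profile described above.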
Global Model Performance
NVIDIA/NVFlare #2640 | opened 2024-06-12 by Chucksete | 0 comments
/NVIDIA/NVFlare/discussions/2640

Hello,

I have created a simple federated scenario for image classification with a non-standard network. When I train the network non-federated, I get fairly good results. When I train it federated, I receive more or less unmodified global weights, and I don't know what I am doing wrong.

Initially I split my dataset into four distinct sets. With the first split I train the global model, and the other three I use for the clients; each client gets its own separate split.

I initialize the global model via source_ckpt_file_full_name and set MetaKey.NUM_STEPS_CURRENT_ROUND = 1, since my data splits (for the clients) are more or less equal.

I use the normal Scatter and Gather approach with InTimeAccumulateWeightedAggregator. After training, I download the global weights and validate them on all the data. The results are bad; it looks like the global model learned nothing, or very little. The local models themselves show a learning curve when I use them for validation.

Unfortunately, I cannot share code.

Ordered consumer performance
nats-io/nats.ws #245 | opened 2024-06-14 by hypeJunction | 3 comments
/nats-io/nats.ws/discussions/245

We have now migrated the legacy JetStream subscriptions to ordered consumers, and we are seeing a drastic drop in performance: we went from 8 seconds to receive all messages on multiple stream subjects to 80 seconds. Where would we start with debugging this?
Windows vs Linux compiler performance
earlephilhower/arduino-pico #2228 | opened 2024-06-14 by dennisma | 2 comments
/earlephilhower/arduino-pico/discussions/2228

I noticed that the Earle Philhower Arduino core for RP2040 compiles much faster on Linux machines than on Windows (2x-3x faster).

We have a number of older PCs in a classroom setting running Windows 10, and the compilation process is terribly slow. Builds run twice as fast on an 8 GB Raspberry Pi 5 as on an older Intel i5 (the classroom PC).

I also tested this on an i9 laptop with 64 GB running Windows 11, running Arduino under WSL (Ubuntu) and under Windows side by side. Compilation time for rp2040 under Ubuntu on WSL is significantly faster as well.

(Note: this is just compile times, not uploading.)

So my question is: is there a way to make the Windows compilation run faster?

Mutex group performance
open-rmf/rmf #475 | opened 2024-06-12 by cwrx777 | 1 comment
/open-rmf/rmf/discussions/475

The feedback from the ground is that with a mutex group, the robot's idling time (waiting for the next follow_new_path) is considerably longer, and it quite often receives a replan to the same (current) waypoint. I also notice the fleet adapter keeps logging the following message: "Replanning for [%s] after locking mutexes %s because the external traffic has substantially changed."

Is there anything in the C++ code we can fine-tune to improve the mutex group performance?

Hydro VS. Razor Pages - Performance
hydrostack/hydro #42 | opened 2024-06-15 by orawalters | 0 comments
/hydrostack/hydro/discussions/42

I know the differences in behavior. How does Hydro compare to a normal Razor Pages project in terms of server load? The diffing process sounds computationally expensive.

ondemand vs dom performance?
simdjson/simdjson #2201 | opened 2024-06-13 by Gun9niR | 1 comment
/simdjson/simdjson/discussions/2201

Are there any numbers comparing the performance of the ondemand vs the dom front end? Also, I suppose the simdjson DOM would still be much faster than RapidJSON, right?
Performance for very large dataset
microsoft/kernel-memory #663 | opened 2024-06-12 by roldengarm | 3 comments
/microsoft/kernel-memory/discussions/663

We're using Kernel Memory as a service to ingest about 9 million text records. It's set up as a service on an Azure App Service, with Azure Queues, embedding-3-large on Azure OpenAI, and Postgres as the database. To ingest, we use an Azure Function that calls the KM web service to ingest a document and waits until it's ready. It's configured to do at most 12 in parallel. The current throughput is about 150-200 text records per minute, so the entire data set will take 30-40 days.

Initially I had it running without any throttling, i.e. the Azure Function would just keep ingesting documents, but then the KM service would fall over (for further details, see here). That's when I implemented the parallelization. In that topic, batching was discussed, but that isn't ready yet.

The App Service Plan runs at ~10-20% CPU. The main bottleneck seems to be the embedding generation; when I let it run unthrottled, it fell over because of quota limits in Azure OpenAI.

I've just been on the Semantic Kernel Office Hours chat and they recommended reaching out here.

In the interim, is there anything we can do to improve the performance?