How to set a really big Windows 10 mouse cursor?

By : Undef

I have a Windows application (a children's game built with Unity) that I want to set a custom cursor for. I only have the binary for the application; it's not my code. The application runs full screen and appears to use the Windows cursor, which is too small to be easily seen by children.

I would like to create a much bigger custom mouse cursor from a PNG and use that in the game (much bigger than even the Windows accessibility cursors).

So far I have tried:

  1. Using the Windows 10 Control Panel to set the cursor for the whole OS to a 160x160 px .ico file. Windows then shrinks the image down to a much smaller size (64x64?).

  2. Writing a Windows Forms app in C# that uses my PNG as a custom cursor, using code like:

    IntPtr ptr = myPng.GetHicon();   // icon handle created from the bitmap
    myCursor = new Cursor(ptr);      // wrap the handle in a Cursor
    this.Cursor = myCursor;          // applies to this form only

This partially works: the cursor is as big as I want, but it only changes the cursor for my application, not for the whole OS (this is the expected behaviour of these functions).

  3. Using SetSystemCursor from user32.dll in my C# app to set the system-wide cursor to the one built from my PNG as in 2. This changes the system-wide cursor, but the image is back to being shrunk down by Windows to a small size, as with the .ico in 1.
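For reference, the SetSystemCursor approach described above can be sketched roughly like this; the file name is a placeholder, and as noted, Windows still scales the resulting cursor down:

```csharp
using System;
using System.Drawing;
using System.Runtime.InteropServices;

class BigCursor
{
    // OCR_NORMAL (32512) identifies the standard arrow cursor.
    const uint OCR_NORMAL = 32512;

    [DllImport("user32.dll")]
    static extern bool SetSystemCursor(IntPtr hCursor, uint id);

    static void Main()
    {
        // "big-cursor.png" is a placeholder file name for illustration.
        using (var png = new Bitmap("big-cursor.png"))
        {
            IntPtr hIcon = png.GetHicon();
            // SetSystemCursor takes ownership of the handle and destroys
            // it, so the handle must not be reused or freed afterwards.
            SetSystemCursor(hIcon, OCR_NORMAL);
        }
    }
}
```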

So, is what I want to do possible? What approach have I missed?!



You would need to use a third-party utility such as MouseChanger, which is freeware available from SourceForge.

A parameter's default value is assigned when you omit the parameter entirely. If you provide the parameter name but omit the value, $null is passed.

Instead of using boolean parameters, it's usually better to use switches:

    function foo {
        param(
            [string]$a,
            [string]$b = "bar",
            [switch]$c
        )
        Write-Host "a: $a`nb: $b`nc: $c"
    }

The value of a switch is automatically $false when omitted and $true when present.

PS C:\> foo -a test -b test -c
a: test
b: test
c: True
PS C:\> foo -a test -b test
a: test
b: test
c: False

You can also explicitly pass a value like this:

PS C:\> foo -a test -b test -c:$true
a: test
b: test
c: True
PS C:\> foo -a test -b test -c:$false
a: test
b: test
c: False

The majority of your concerns seem to boil down to either misuse or misunderstanding.

  • much bigger codesize

    This is usually a result of properly respecting both the Single Responsibility Principle and the Interface Segregation Principle. Is it drastically bigger? I suspect it is not as large as you claim. What it most likely reflects is that classes are being pared down to specific responsibilities, rather than being "catch-all" classes that do anything and everything. In most cases this is a sign of healthy separation of concerns, not a problem.

  • ravioli-code instead of spaghetti-code

    Once again, this is most likely causing you to think in stacks instead of hard-to-see dependencies. I think this is a great benefit since it leads to proper abstraction and encapsulation.

  • slower performance

    Just use a fast container. My favorites are SimpleInjector and LightInject.

  • need to initialize all dependencies in constructor even if the method I want to call has only one dependency

    Once again, this is a sign that you are violating the Single Responsibility Principle. This is a good thing because it is forcing you to logically think through your architecture rather than adding willy-nilly.

  • harder to understand when no IDE is used; some errors are pushed to run-time

    If you are STILL not using an IDE, shame on you. There's no good argument against one on modern machines. In addition, some containers (SimpleInjector) can validate the whole object graph up front if you so choose, and you can easily do this in a simple unit test.
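As an illustration of that up-front validation, Simple Injector exposes a Verify() method on its container; a minimal sketch, with hypothetical types:

```csharp
using SimpleInjector;

public interface IGreeter { string Greet(); }

public class ConsoleGreeter : IGreeter
{
    public string Greet() => "hello";
}

public static class CompositionRoot
{
    public static void Main()
    {
        var container = new Container();
        container.Register<IGreeter, ConsoleGreeter>();

        // Verify() constructs every registered component once, so wiring
        // mistakes surface here (or in a unit test at build time) instead
        // of at some arbitrary point at run time.
        container.Verify();
    }
}
```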

  • adding additional dependency (DI framework itself)

    You have to pick and choose your battles. If the cost of learning a new framework is less than the cost of maintaining spaghetti code (and I suspect it will be), then the cost is justified.

  • new staff have to learn DI first in order to work with it

    If we shy away from new patterns, we never grow. I think of this as an opportunity to enrich and grow your team, not a way to hurt them. In addition, the tradeoff is learning the spaghetti code, which might be far more difficult than picking up an industry-wide pattern.

  • a lot of boilerplate code which is bad for creative people (for example copy instances from constructor to properties...)

    This is plain wrong. Mandatory dependencies should always be passed in via the constructor. Only optional dependencies should be set via properties, and that should only be done in very specific circumstances since oftentimes it is violating the Single Responsibility Principle.
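A minimal sketch of that guideline, with hypothetical interface and class names: the mandatory dependency comes in through the constructor, the optional one through a property.

```csharp
using System;

public interface IOrderRepository { void Save(string order); }
public interface ILogger { void Log(string message); }

public class OrderService
{
    private readonly IOrderRepository _repository;

    // Mandatory dependency: the class cannot function without a
    // repository, so the constructor requires it.
    public OrderService(IOrderRepository repository)
    {
        _repository = repository
            ?? throw new ArgumentNullException(nameof(repository));
    }

    // Optional dependency: logging is not essential, so it may be set
    // via a property (and only in very specific circumstances).
    public ILogger Logger { get; set; }

    public void Place(string order)
    {
        _repository.Save(order);
        Logger?.Log($"Placed order: {order}");
    }
}
```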

  • We do not test the entire codebase, but only certain methods, and we use a real database. So, should Dependency Injection be avoided when no mocking is required for testing?

    I think this might be the biggest misconception of all. Dependency Injection isn't JUST for making testing easier. It is so you can glance at the signature of a class constructor and IMMEDIATELY know what is required to make that class tick. This is impossible with static classes since classes can call both up and down the stack whenever they like without rhyme or reason. Your goal should be to add consistency, clarity, and distinction to your code. This is the single biggest reason to use DI and it is why I highly recommend you revisit it.

By : David L

This video can help you solve your question :)
By: admin